Saturday, April 28, 2012

Cookbook SAS Codes for Bioequivalence Test in 2x2x2 Crossover Design

In Clinical Pharmacology, inferential statistics are used to demonstrate bioequivalence in terms of the Area Under the Curve (AUC) and the Maximum Concentration (Cmax) obtained from the concentration-time data. The typical clinical trial design is the 2x2x2 crossover design, which contains two treatment sequences (Test followed by Reference vs. Reference followed by Test), two treatment periods (period 1 vs. period 2), and two treatments (Test vs. Reference).

According to FDA’s guidance “Statistical Approaches to Establishing Bioequivalence”, the following assumptions can be made for the test of bioequivalence:

1. AUC and Cmax follow log-normal distribution

2. Bioequivalence is shown if the 90% confidence interval for the geometric least squares mean ratio of Test/Reference falls within 0.80 and 1.25

The statistical test follows the so-called two one-sided tests (TOST) procedure, which can be implemented using the following cookbook SAS codes.
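Before turning to the SAS code, the 90% CI criterion can be sanity-checked with a small numeric sketch. The numbers below (log-scale difference of least-squares means, its standard error, degrees of freedom, and the resulting t critical value) are made up for illustration; in practice they would come from the Proc Mixed output.

```python
import math

# Hypothetical log-scale results (illustrative numbers, not from the post):
diff   = 0.05    # lsmean(Test) - lsmean(Reference) on the natural-log scale
se     = 0.04    # standard error of that difference
t_crit = 1.734   # t(0.95, df=18) from a t table, giving a two-sided 90% CI

ratio = math.exp(diff)                 # point estimate of the geometric mean ratio
lower = math.exp(diff - t_crit * se)   # lower bound of the 90% CI for the ratio
upper = math.exp(diff + t_crit * se)   # upper bound of the 90% CI for the ratio

# TOST / 90% CI criterion: bioequivalent if the whole CI sits inside [0.80, 1.25]
bioequivalent = (lower >= 0.80) and (upper <= 1.25)
print(round(ratio, 3), round(lower, 3), round(upper, 3), bioequivalent)
```

With these assumed inputs the interval is contained in [0.80, 1.25], so bioequivalence would be concluded.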

* Preparing the data and log-transforming the AUC and Cmax data;
data pkparm;
    set pdkparm;
    lauc=log(auc);   *natural log of AUC;
    lcmax=log(cmax); *natural log of Cmax;
    keep subno sequence treat period auc cmax lauc lcmax;
run;

*** Fit the ANOVA model (run once with lauc as the response, then again with lcmax);
ods output LSMeans=lsmean;
ods output Estimates=est;
proc mixed data=pkparm;
      class sequence period treat subno;
      model lauc=sequence period treat;
      random subno(sequence);
      lsmeans treat/pdiff cl alpha=0.1;
      estimate 'T/R' treat -1 1 / cl alpha=0.1;
     * make 'LSMEANS' out=lsmean; *used in old SAS versions;
     * make 'estimate' out=est; *used in old SAS versions;
run;

* Anti-log transformation to obtain the geometric means;
data lsmean;
      set lsmean;
      gmean=exp(estimate); *Geometric means;
run;

proc print data=lsmean;
run;
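For intuition about why exp(estimate) yields a geometric mean: back-transforming the mean of the log values is the same as taking the n-th root of the product. A quick check with made-up AUC values (hypothetical numbers, not from the post):

```python
import math

values = [820.0, 1000.0, 1450.0]  # hypothetical AUC values

# Mean on the log scale, then anti-log: this is what exp(estimate) does
log_mean = sum(math.log(v) for v in values) / len(values)
geometric_mean = math.exp(log_mean)

# Same result as the classical definition: n-th root of the product
alt = (820.0 * 1000.0 * 1450.0) ** (1 / 3)
print(round(geometric_mean, 2), round(alt, 2))
```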

* Anti-log transformation to obtain the ratio of Geometric Means (point estimate) and its 90% confidence interval (lower and upper bounds);
data diffs;
     set est;
     ratio=exp(estimate); ** Ratio of geometric means;
     lower=exp(lower); ** 90% CI lower bound;
     upper=exp(upper); ** 90% CI upper bound;
run;

proc print data=diffs;
run;

Some Notes:

1. The p-value for the Test/Reference comparison can also be obtained from the model (in the EST data set above). However, the p-value is not the criterion for declaring bioequivalence and must be interpreted appropriately. We could have a significant p-value (p&lt;0.05) and still show bioequivalence, as long as the 90% confidence interval of the geometric mean ratio falls within the 0.80 to 1.25 range. If we have a 90% confidence interval like [0.85, 0.95] or [1.05, 1.15], bioequivalence will be shown even though the p-value is significant.
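The logic of this note can be checked mechanically. Taking the hypothetical interval [0.85, 0.95] from the note: the CI excludes 1.0 (so the Test/Reference difference is statistically significant at the corresponding level), yet the interval lies entirely inside the BE limits, so bioequivalence is still demonstrated. A minimal sketch:

```python
# 90% CI for the geometric mean ratio, taken from the note's hypothetical example
lower, upper = 0.85, 0.95

# The interval excludes 1.0: a statistically significant T vs R difference
significant = not (lower <= 1.0 <= upper)

# ...yet it sits entirely within [0.80, 1.25], so bioequivalence is shown
bioequivalent = (lower >= 0.80) and (upper <= 1.25)
print(significant, bioequivalent)
```

Both flags come out true, which is exactly the point of the note: significance and bioequivalence are separate questions.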

2. In the SAS Proc Mixed model, the subject within sequence is coded as subno(sequence), not sequence(subno). However, if you use sequence(subno), the results will be the same.

3. For the log transformation, it does not matter which base is used (base 10, base 5, or base e (natural log)) as long as the final results from the model are correctly anti-log transformed back.
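This base-invariance is easy to verify: as long as the back-transformation uses the same base as the forward transformation, the resulting ratio is identical. A quick check with hypothetical AUC values:

```python
import math

auc_test, auc_ref = 1150.0, 1000.0  # hypothetical AUC values

# Natural log: difference of logs, then exp() to back-transform
ratio_e = math.exp(math.log(auc_test) - math.log(auc_ref))

# Base 10: difference of logs, then 10**x to back-transform
ratio_10 = 10 ** (math.log10(auc_test) - math.log10(auc_ref))

# Both recover the same Test/Reference ratio
print(round(ratio_e, 6), round(ratio_10, 6))
```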

4. While we typically say 'ratio of geometric means', it is actually the 'ratio of geometric least squares means' from the model.

5. FDA guidance "Statistical Approaches to Establishing Bioequivalence", Appendix E "SAS Program Statements for Average BE Analysis of Replicated Crossover Studies", provides detailed SAS code using Proc Mixed. While it is stated for 'replicated crossover studies', the 2x2x2 crossover design is the simplest case of the replicated crossover designs.

The following illustrates an example of program statements to run the average BE analysis using
PROC MIXED in SAS version 6.12, with SEQ, SUBJ, PER, and TRT identifying sequence,
subject, period, and treatment variables, respectively, and Y denoting the response measure (e.g., log(AUC), log(Cmax)) being analyzed:

ESTIMATE 'T vs. R' TRT 1 -1/CL ALPHA=0.1;

The Estimate statement assumes that the code for the T formulation precedes the code for the R formulation in sort order (this would be the case, for example, if T were coded as 1 and R were coded as 2). If the R code precedes the T code in sort order, the coefficients in the Estimate statement would be changed to -1 1.

In the random statement, TYPE=FA0(2) could possibly be replaced by TYPE=CSH. This guidance recommends that TYPE=UN not be used, as it could result in an invalid (i.e., not non-negative definite) estimated covariance matrix.
Additions and modifications to these statements can be made if the study is carried out in more than one group of subjects.

Thursday, April 19, 2012

The "PATIENTS' FDA" Act - Sens. Richard Burr and Tom Coburn Introduce a New Plan to Reform the FDA

In my previous article "Should the design and conduct of clinical trials be simplified?", I discussed several FDA guidance documents suggesting that, in several areas, data collection may be reduced and clinical trial monitoring may need to switch to a risk-based approach instead of the current frequent on-site visits and 100% source data verification.

Coincidentally, yesterday, Sens. Richard Burr and Tom Coburn introduced a new plan to reform the FDA - the "PATIENTS' FDA" Act. The PATIENTS' FDA Act (if approved) would force the FDA to be more transparent and to be mindful about requesting too much data from pharmaceutical companies. For the last several years, after several high-profile drug withdrawals (Vioxx and Avandia, for example), the FDA has swung to the other extreme and become very conservative, which has subsequently made clinical trials more difficult to execute and drug development more costly. Perhaps it is really not the FDA's intention, but many of its staff/reviewers have become too conservative. Instead of working with the industry to bring new medications to patients at reduced cost and within a reasonable timeframe, some reviewers ask sponsors to collect data with no real justification and to implement things that may just reflect the reviewer's own interest or opinion.

Forbes published a good article as a companion to this bill. Here are some of the paragraphs from this article.

More accountability for meeting drug-review deadlines. The FDA has been increasingly failing to meet its PDUFA-mandated deadlines for giving companies approval decisions on new drug applications. The PATIENTS’ FDA Act would require the FDA to “report [to Congress] on a deeper level detail with respect to the performance goals agreed to in the prescription drug, generic drug, and biosimilar user fee agreements,” and hold individual reviewers accountable for their speed in reviewing applications.
Stop forcing companies to do unnecessary and expensive busywork. The bill’s summary notes that “some FDA reviewers request reams of additional information about a drug or device that is beyond the scope of data needed to meet the FDA’s approval standard.” The FDA will be required, under the bill, to “document the scientific and regulatory rationale” for such decisions, and review within one year “the costs and adoption of the least burdensome approaches to regulation.” The bill would also codify the FDA’s “commitment to improve on patient risk-benefit considerations…to ensure accountability for fulfilling…the user fee agreements.”
Take more advantage of clinical trials in other countries. The bill would require FDA to work with “other specific regulatory authorities of similar standing” to encourage uniform standards for clinical trials. (The Geneva-based International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, or ICH, performs many of these functions.) FDA will also be instructed to help sponsors “minimize the need for duplication of clinical studies, preclinical studies or non-clinical studies.”

Sunday, April 15, 2012

Should the design and conduct of clinical trials be simplified?

As mentioned in one of FDA’s guidance documents, “in the past two decades, the number and complexity of clinical trials have grown dramatically. These changes create new challenges in clinical trial oversight such as increased variability in investigator experience, ethical oversight, site infrastructure, treatment choices, standards of health care, and geographic dispersion.” According to an article by Mr. Getz titled “The Heavy Burden of Protocol Design: More complex and demanding protocols are hurting clinical trial performance and success”, companies sponsoring clinical research have openly acknowledged that protocol design negatively impacts clinical trial performance and may well be the single largest source of delays in getting studies completed.

When designing a clinical trial, sponsors often try to include too many endpoints and too many measures, hoping that all of these endpoints (if the results are good) will contribute to the overall evidence of clinical efficacy. Sponsors attempt to collect a lot of information that is not must-have but nice-to-have (for future data dredging, marketing, publications, …). We want to do it all in one clinical study. This obviously increases the complexity of the study protocol, which in turn increases the length and cost of the clinical trial and degrades the quality of the clinical trial data (more protocol noncompliance).
In terms of the conduct of clinical trials, the emphasis on compliance with good clinical practice has resulted in perceptions that clinical trial data must be 100% monitored and source-verified, that all data programming and analysis must be independently validated, and that over-reporting adverse events is a requirement of GCP compliance…
Over the last two years, FDA has issued several guidance documents in an attempt to change these perceptions.
In FDA’s new draft guidance "Determining the extent of safety data collection needed in late stage premarket and post-approval clinical investigations", it states that its intention is “to assist clinical trial sponsors in determining the amount and types of safety data to collect in late-stage premarket and post-market clinical investigations for drugs or biological products, based on existing information about a product’s safety profile.” This new guidance addresses the circumstances in which it may be acceptable to acquire a reduced amount of safety information during clinical trials. In some situations, excessive data collection may be unnecessary and not helpful.
In the Guidance for Industry “Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring”, FDA encourages sponsors to implement alternative clinical monitoring approaches instead of treating on-site monitoring as the only way. The webinar and presentation slides can be found at
“For major efficacy trials, companies typically conduct on-site monitoring visits at approximately four- to eight-week intervals, at least partly because of the perception that the frequent on-site monitoring visit model, with 100% verification of all data, is FDA’s preferred way for sponsors to meet their monitoring obligations. In contrast, academic coordinating centers, cooperative groups, and government organizations use on-site monitoring less extensively. For example, some government agencies and oncology cooperative groups typically visit sites only once every two or three years to qualify/certify clinical study sites to ensure they have the resources, training, and safeguards to conduct clinical trials. FDA also recognizes that data from critical outcome studies (e.g., many National Institutes of Health-sponsored trials, Medical Research Council-sponsored trials in the United Kingdom, International Study of Infarct Survival, and GISSI), which had no regular on-site monitoring and relied largely on centralized and other alternative monitoring methods, have been relied on by regulators and practitioners. These examples demonstrate that use of alternative monitoring approaches should be considered by all sponsors, including commercial sponsors when developing risk-based monitoring strategies and plans”
Many sponsors may be reluctant to adopt this guidance and may stick with the status quo approach of frequent on-site visits with 100% verification of all data. They may worry that loosening the clinical monitoring could lead to noncompliance.

In the Guidance for Clinical Investigators, Sponsors, and IRBs “Adverse Event Reporting to IRBs — Improving Human Subject Protection”, FDA advised sponsors to report an AE to the IRB (and presumably to FDA) only if it is unexpected, serious, and would have implications for the conduct of the study, rather than reporting all unanticipated AEs. The sponsor should analyze unanticipated AEs before reporting them.
“the practice of local investigators reporting individual, unanalyzed events to IRBs, including reports of events from other study sites that the investigator receives from the sponsor of a multi-center study—often with limited information and no explanation of how the event represents an unanticipated problem—has led to the submission of large numbers of reports to IRBs that are uninformative. IRBs have expressed concern that the way in which investigators and sponsors of IND studies typically interpret the regulatory requirement to inform IRBs of all "unanticipated problems" does not yield information about adverse events that is useful to IRBs and thus hinders their ability to ensure the protection of human subjects.”
There are many other areas in clinical trial practice that can and should be simplified. For example, some protocols instruct investigators to record and report as AEs/SAEs all untoward events that occur during a study, which could include common symptoms of the disease under study and/or other expected clinical outcomes that are not study endpoints. Over-reporting of AEs/SAEs can incur additional burden and can dilute or obscure signal identification. Another example is spending too much effort on screen-failure subjects. It is true that the recording of adverse events starts once the informed consent form is signed. However, it is unnecessary to write a full-blown SAE narrative for a screen-failure subject that has nothing to do with the assessment of the safety of the experimental product.