Saturday, May 26, 2012

SAS Resources on the Web

To be a good statistician, mastering a statistical programming language (SAS, SPSS, R, ...) is very important. This is especially true for statisticians working in the pharmaceutical/biotech/CRO industry. While other statistical software may be popular in other areas, SAS is still dominant in the pharmaceutical/biotech/CRO industry.

Fortunately, there are a lot of resources for SAS programming. The following links are worth bookmarking.

Sunday, May 20, 2012

Log(x+1) Data Transformation


When performing data analysis, the data are sometimes skewed and not normally distributed, and a data transformation is needed. We are very familiar with the typical data transformation approaches such as the log transformation and the square root transformation. As a special case of the logarithm transformation, log(x+1) or log(1+x) can also be used.

The first time I had to use the log(x+1) transformation was for a dose-response data set where the dose was on an exponential scale with a control group concentration of zero. The data set came from a so-called Whole Effluent Toxicity test. The Whole Effluent Toxicity test, one of the aquatic toxicological experiments, has been used by the US Environmental Protection Agency (USEPA) to identify effluents and receiving waters containing toxic materials and to estimate the toxicity of waste water. In Whole Effluent Toxicity testing, many different species and several endpoints are used to measure the aggregate toxic effect of an effluent. For many of these biological endpoints, toxicity is manifested as a reduction in the response relative to the control group. Whole Effluent Toxicity testing often uses a multi-concentration design, including a minimum of five concentrations of effluent and one control group. Therefore, from a dose-response analysis standpoint, the control group dose is considered zero and the various concentrations are spaced on an exponential scale. Prior to the analysis, a log transformation of the dose, log(x), is usually applied. Since the control group dose is zero and log(0) is undefined, an easy solution is to use log(x+1). For the control group, log(0+1) = 0, which works out perfectly in this case.
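
As an illustration, here is a minimal SAS sketch of this kind of dose transformation; the data set and variable names (wet_raw, dose) are hypothetical:

data wet_test;
  set wet_raw;                 /* hypothetical input data set                 */
  log_dose = log(dose + 1);    /* natural log; the zero-concentration control */
                               /* group maps to log(0 + 1) = 0                */
run;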

However, in clinical trials, I have seen many applications of the log transformation, but not the log(x+1) transformation. From the FDA website, I could only find one study where the log(1+x) transformation was used. In the advisory committee meeting document for AstraZeneca's drug esomeprazole, the statistical analysis for the primary endpoint was stated as:
"The primary endpoint, change from baseline in signs and symptoms of GERD observed from video and cardiorespiratory monitoring, was analyzed by ANCOVA. Prior to the analysis, the number of events at baseline and final visit were normalized (to correspond to 8 hours observation time) then log-transformed via a log(1+x) transformation. The ANCOVA of change from baseline on the log-scale was adjusted for treatment and baseline. The least square means (lsmeans) for each treatment group were transformed and expressed as estimated percentage changes from baseline, and the lsmean for the esomeprazole treatment effect was transformed similarly, and expressed as a percentage difference from placebo, which was presented with the associated 2-sided 95% CI and p-value. "

Recently I read an article by Lachin et al, “Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes”, in which various data transformation techniques were compared and log(x+1) and sqrt(x) (the square root of x) were suggested for the primary endpoint of the C-peptide AUC mean. According to the paper, “Most C-peptide values will fall between 0 and 1 and the distribution is positively skewed. Thus, scale-contracting transformations were considered. However, the log transformation could introduce negative skewness because log(x) approaches negative infinity as the value x approaches zero. This can be corrected by using log(x+1).”

In a discussion with my friend Dr. Song from the CDC, we came up with the following Q&A regarding the use of the log(x+1) transformation:

Q: Is log(x+1) a fine approach for data transformation?
A: It is fine to use log(x+1) as long as this transformation makes the data normal and the variance relatively constant.

Q: Since the reason for using the log(x+1) transformation is to avoid log(x) approaching negative infinity as x approaches zero, could we instead change the measurement unit, for example from pmol/mL to pmol/dL (1 pmol/mL = 100 pmol/dL), so that the values become larger?
A: log(100x) = log(100) + log(x). This only shifts the transformed values (making them positive); it does not change the normality or the variability. From a statistical point of view, it is equivalent to the log(x) transformation.

Q: With the log(x), sqrt(x), ... transformations, we can readily transform the calculated values or estimates back to the original scale. With log(x+1), do we have a problem converting the calculated values or estimates back to the original scale?
A: According to the paper by Lachin, “For each transformation y=f(x), the mean values and confidence limits are presented using the inverse transformation applied to the mean of the transformed values, and the corresponding confidence limits. Thus, for an analysis using y=log(x), the inverse mean is the geometric mean exp(mean y). For an analysis using y=log(x+1), the inverse mean is the geometric-like mean exp(mean y) - 1. For an analysis using y=sqrt(x), the inverse mean is (mean y)**2.”
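
Applying the inverse transformations quoted above is straightforward; below is a minimal SAS sketch, where summary and mean_y are hypothetical names for a data set holding the mean of the transformed values (the same formulas apply to the confidence limits):

data back_transformed;
  set summary;                   /* hypothetical: mean_y = mean on the transformed scale */
  inv_log  = exp(mean_y);        /* inverse of y = log(x): geometric mean                */
  inv_log1 = exp(mean_y) - 1;    /* inverse of y = log(x+1): geometric-like mean         */
  inv_sqrt = mean_y**2;          /* inverse of y = sqrt(x)                               */
run;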

Q: Does whether one transformation approach is better than another depend on the range of the values?
A: The log(x+1) transformation is often used for data that are right-skewed but also include zero values. The shape of the resulting distribution depends on how big x is compared to the constant 1, and therefore on the units in which x was measured. In the C-peptide AUC mean situation, all transformations behave similarly at the higher levels of x (mean = 0.04 at month 24), but at the lower levels of x (mean = 0.01 at month 12), sqrt(x) is better than log(x+1) and log(x+1) is better than log(x). This can be easily understood from the curvature of the transformations shown in the following graph.
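
A graph of this kind can be produced with a few lines of SAS; the value range below is chosen purely for illustration:

data curves;
  do x = 0.01 to 2 by 0.01;
    log_x  = log(x);         /* approaches negative infinity as x -> 0 */
    log_x1 = log(x + 1);     /* equals 0 at x = 0                      */
    sqrt_x = sqrt(x);
    output;
  end;
run;

proc sgplot data=curves;
  series x=x y=log_x  / legendlabel="log(x)";
  series x=x y=log_x1 / legendlabel="log(x+1)";
  series x=x y=sqrt_x / legendlabel="sqrt(x)";
  xaxis label="x";
  yaxis label="Transformed value";
run;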



Some additional notes about the use of the log(x+1) transformation:
  • Any base can be used for the logarithm, but base 10 is often chosen because of its interpretability.
  • In addition to log(x+1), transformations such as log(2x+1) or log(x+3/8) may also be used.
  • Remember to re-inspect the data after transformation to confirm its suitability; this holds no matter which transformation approach is used (see the sketch below).
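
A quick way to re-inspect a transformed variable in SAS is PROC UNIVARIATE; the data set and variable names below are hypothetical:

proc univariate data=transformed normal;
  var log_x1;                                /* hypothetical log(x+1)-transformed variable */
  histogram log_x1 / normal;                 /* histogram with fitted normal curve         */
  qqplot log_x1 / normal(mu=est sigma=est);  /* normal Q-Q plot                            */
run;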

Sunday, May 06, 2012

Is Last Observation Carried Forward (LOCF) a dead approach?


“To cope with situations where data collection is interrupted before the predetermined last evaluation timepoint, one widely used single imputation method is Last Observation Carried Forward (LOCF). This analysis imputes the last measured value of the endpoint to all subsequent, scheduled, but missing, evaluations.” For a study with clinical outcomes measured at multiple timepoints (repeated measures), if an endpoint analysis approach is used for the primary efficacy variable, the most convenient and easy-to-understand imputation method is LOCF. In an endpoint analysis, the change from baseline to the last measurement (at a fixed timepoint, such as one year or two years) is the dependent variable.
LOCF is also the imputation approach for missing data most easily understood by non-statisticians. However, the LOCF approach has been the target of criticism from statisticians for its lack of a sound statistical foundation and for its potential bias in either direction (i.e., it is not necessarily conservative). After the National Academies published its draft report "The prevention and treatment of missing data in clinical trials", using the LOCF approach seemed to be outdated and markedly out of step with modern statistical thinking.
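
For reference, a minimal SAS sketch of a typical LOCF imputation (data set and variable names are hypothetical):

proc sort data=visits;           /* hypothetical longitudinal data set */
  by subjid visitnum;
run;

data locf;
  set visits;
  by subjid;
  retain locf_val;
  if first.subjid then locf_val = .;          /* reset for each subject           */
  if not missing(aval) then locf_val = aval;  /* remember the last observed value */
  else aval = locf_val;                       /* impute a missing value with it   */
run;

Note that this only fills in records that are present with a missing value; scheduled visits that are entirely absent from the data set would first need to be added as records with missing values.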

Is the LOCF dead? Can we still use this approach in some situations in some clinical trials?

In reviewing some of the regulatory guidance, I believe that LOCF is not totally dead. While we acknowledge that LOCF is not a perfect approach, it should not be totally abandoned. In some situations, the LOCF approach is commonly agreed to be more conservative and may be appropriate to use.

In EMEA’s guidance "Guideline on Missing Data in Confirmatory Clinical Trials", opinions and examples about the use of LOCF are given:

Only under certain restrictive assumptions does LOCF produce an unbiased estimate of the treatment effect. Moreover, in some situations, LOCF does not produce conservative estimates. However, this approach can still provide a conservative estimate of the treatment effect in some circumstances.
To give some particular examples, if the patient’s condition is expected to deteriorate over time (for example in Alzheimer’s disease) an LOCF analysis is very likely to give overly optimistic results for both treatment groups, and if the withdrawals on the active group are earlier (e.g. because of adverse events) the treatment comparison will clearly provide an inappropriate estimate of the treatment effect and may be biased in favour of the test product. Hence in this situation an LOCF analysis is not considered appropriate. Indeed in Alzheimer’s disease, and other indications for diseases that deteriorate over time, finding a method that gives an appropriate estimate of the treatment effect will usually be difficult and multiple sensitivity analyses will frequently be required.
However, in other clinical situations (e.g. depression), where the condition is expected to improve spontaneously over time, LOCF (even though it has some sub-optimal statistical properties) might be conservative in the situations where patients in the experimental group tend to withdraw earlier and more frequently. Establishing a treatment effect based on a primary analysis which is clearly conservative represents compelling evidence of efficacy from a regulatory perspective.

Some of the FDA’s guidance documents give clear instructions on the use of the LOCF approach in handling missing data, which was more surprising to me.

In FDA’s guidance on “Diabetes Mellitus: Developing Drugs and Therapeutic Biologics for Treatment and Prevention”, I am surprised to see that the LOCF approach is actually suggested, even though the LOCF approach may not be a conservative approach as indicated in the statements below, since the HbA1c is expected to increase if the experimental drug is effective.
Although every reasonable attempt should be made to obtain complete HbA1c data on all subjects, dropouts are often unavoidable in diabetes clinical trials. The resulting missing data problems do not have a single general analytical solution. Statistical analysis using last observation carried forward (LOCF) is easy to apply and transparent in the context of diabetes trials. Assuming an effective investigational therapy, it is often the case that more placebo patients will drop out early because of a lack of efficacy, and as such, LOCF will tend to underestimate the true effect of the drug relative to placebo providing a conservative estimate of the drug’s effect. The primary method the sponsor chooses for handling incomplete data should be robust to the expected missing data structure and the time-course of HbA1c changes, and whose results can be supported by alternative analyses. We also suggest that additional analyses be conducted in studies with missing data from patients who receive rescue medication for lack of adequate glycemic control. These sensitivity analyses should take account of the effects of rescue medication on the outcome.

In FDA’s guidance “Developing Products for Weight Management”, LOCF is also suggested, even though the guidance also says “repeated measures analyses can be used to analyze longitudinal weight measurements but should estimate the treatment effect at the final time point.”

The analysis of (percentage) weight change from baseline should use ANOVA or ANCOVA with baseline weight as a covariate in the model. The analysis should be applied to the last observation carried forward on treatment in the modified ITT population defined as subjects who received at least one dose of study drug and have at least one post-baseline assessment of body weight. Sensitivity analyses employing other imputation strategies should assess the effect of dropouts on the results. The imputation strategy should always be prespecified and should consider the expected dropout patterns and the time-course of weight changes in the treatment groups. No imputation strategy will work for all situations, particularly when the dropout rate is high, so a primary study objective should be to keep missing values to a minimum. Repeated measures analyses can be used to analyze longitudinal weight measurements but should estimate the treatment effect at the final time point. Statistical models should incorporate as factors any variables used to stratify the randomization. As important as assessing statistical significance is estimating the size of the treatment effect. If statistical significance is achieved on the co-primary endpoints, type 1 error should be controlled across all clinically relevant secondary efficacy endpoints intended for product labeling.

This guidance was criticized by academics for several issues, including the use of the LOCF approach for the primary efficacy analyses. In comments submitted by UAB and Duke, the following statements were made:

While we strongly agree with the use of ITT approaches, we believe that the use of last observation carried forward (LOCF) is markedly out of step with modern statistical thinking. This perhaps reflects the fact that the 2004 FDA advisory meeting addressing this topic did not include a statistician with clinical trial expertise. A review of the video tapes referred to above will show that several leading statisticians all eschewed LOCF and suggested alternatives. These alternatives are now well established2 and available in major statistical packages. We have a paper nearing completion that compares the performance of these various approaches in multiple real obesity trials and will be glad to share a copy with FDA upon request. LOCF does not have a sound statistical foundation and can be biased in either direction (i.e., it is not necessarily conservative). Our own work suggests that multiple imputation may be the best method for conducting ITT analyses in obesity trials and that standard mixed models also work quite well in reasonably sized studies.
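
The alternatives mentioned in the comment (standard mixed models, multiple imputation) are readily available in SAS. Below is a hedged sketch of a mixed model for repeated measures (MMRM); the data set and variable names are hypothetical, and the exact model specification would depend on the trial:

proc mixed data=visits;
  class subjid trt visit;
  model chg = trt visit trt*visit base / ddfm=kr;   /* base = baseline covariate */
  repeated visit / subject=subjid type=un;          /* unstructured covariance   */
  lsmeans trt*visit / cl diff;                      /* treatment effect by visit */
run;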

However, advisory committee meeting material for NDA 022580 for QNEXA in 2012 indicated that the LOCF approach is still used for the primary efficacy analyses in weight management product development.

When the outcome variable is dichotomous (success/failure, responder/non-responder, ...), LOCF is more acceptable if any subject who withdraws from the study early is considered a treatment failure or non-responder. This approach may also be called ‘treatment failure imputation’ and is the most conservative approach. It is suggested in the FDA Draft Guidance on Tacrolimus. In a recent NDA submission (202-736/N0001, Sklice (ivermectin) topical cream, 0.5%, for the treatment of head lice infestations), this approach was also used in handling the missing data for the primary efficacy analysis.
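
A treatment-failure (non-responder) imputation is simple to implement; below is a minimal SAS sketch with hypothetical data set and variable names:

data resp_imputed;
  set adresp;                                             /* hypothetical responder data set    */
  if missing(resp) or withdrew_early = 1 then resp = 0;   /* early withdrawal counted as failure */
run;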

In the end, there is no perfect imputation approach if missing data occur too often. Throughout the clinical trial, from the study design to protocol compliance to data collection, every effort should be made to minimize missing data. I think the statements about handling missing data are pretty clear and reasonable in FDA’s Draft Guidance for Industry and Food and Drug Administration Staff - The Content of Investigational Device Exemption (IDE) and Premarket Approval (PMA) Applications for Low Glucose Suspend (LGS) Device Systems:

Handling of Missing Data
Starting at the study design stage and throughout the clinical trial, every effort should be made to minimize patient withdrawals and lost to follow-ups. Premature discontinuation should be summarized by reason for discontinuation and treatment group. For an ITT population, an appropriate imputation method should be specified to impute missing HbA1c and other primary endpoints in the primary analysis. It is recommended that the Sponsor/Applicant plan a sensitivity analysis in the protocol to evaluate the impact of missing data using different methods, which may include but is not limited to per protocol, Last Observation Carry Forward (LOCF), multiple imputation, all missing as failures or success, worst case scenario, best case scenario, tipping point, etc.