Sunday, January 17, 2021

Arithmetic mean, geometric mean, harmonic mean, least square mean, and trimmed mean

In statistics, a central tendency is a central or typical value for a data distribution. The mean (or average) is commonly used to measure the central tendency. However, depending on the data distribution or the specific situation, different types of means may be used: arithmetic mean, geometric mean, least-squares mean, harmonic mean, and trimmed mean.

The most common is the arithmetic mean; when we simply say 'mean', the arithmetic mean is the default.

Arithmetic Mean is calculated as the sum of all measurements (all observations) divided by the number of observations in the data set.
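In formula terms, for observations $x_1, x_2, \ldots, x_n$:

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$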


Geometric Mean is the nth root of the product of the data values, where there are n of these values. This measure is valid only for data that are measured absolutely on a strictly positive scale. The geometric mean is often used for data that follow a log-normal distribution (for example, pharmacokinetic drug concentration data and antibody titer data).

In practice, the geometric mean is usually calculated with the following three steps (a short sketch follows the list):
  • log-transform the original data
  • calculate the arithmetic mean of the log-transformed data
  • back transform the calculated value to the original scale
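A minimal sketch of these three steps in Python (illustrative data only, not from the original post; assumes numpy is available):

    import numpy as np

    # hypothetical drug concentration data (strictly positive)
    conc = np.array([1.2, 3.4, 2.8, 5.1, 0.9])

    log_values = np.log(conc)          # step 1: log-transform the original data
    mean_log = log_values.mean()       # step 2: arithmetic mean of the log-transformed data
    geometric_mean = np.exp(mean_log)  # step 3: back-transform to the original scale

    # equivalent one-liner: the nth root of the product of the values
    # np.exp(np.log(conc).mean()) == conc.prod() ** (1 / len(conc))
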
Harmonic Mean is the reciprocal of the arithmetic mean of the reciprocals of the data values. This measure, too, is valid only for data that are measured absolutely on a strictly positive scale.

The harmonic mean is calculated with the following steps (a short sketch follows the list):

  • Add the reciprocals of the numbers in the set. To find a reciprocal, flip the fraction so that the numerator becomes the denominator and the denominator becomes the numerator. For example, the reciprocal of 6/1 is 1/6.
  • Divide the answer by the number of items in the set.
  • Take the reciprocal of the result.
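A minimal sketch of these steps in Python (illustrative data only; assumes numpy is available):

    import numpy as np

    values = np.array([2.0, 4.0, 8.0])

    sum_of_reciprocals = (1 / values).sum()                  # step 1: add the reciprocals
    mean_of_reciprocals = sum_of_reciprocals / len(values)   # step 2: divide by the number of items
    harmonic_mean = 1 / mean_of_reciprocals                  # step 3: take the reciprocal of the result

    # scipy offers the same calculation directly: scipy.stats.hmean(values)
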
The harmonic mean is not often used in day-to-day statistics but appears quite often in statistical formulas. For example, for a two-group t-statistic with unequal sample sizes in the two groups, the t value can be calculated using a formula in which the harmonic mean measures the average sample size.
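As a sketch of the idea, for group sizes $n_1$ and $n_2$, the pooled two-sample t-statistic can be written in terms of their harmonic mean $\tilde{n}$:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{s_p\sqrt{1/n_1 + 1/n_2}} = \frac{\bar{x}_1 - \bar{x}_2}{s_p\sqrt{2/\tilde{n}}}, \qquad \tilde{n} = \frac{2 n_1 n_2}{n_1 + n_2}$$

where $s_p$ is the pooled standard deviation and $\tilde{n}$ serves as the average sample size.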


Least Squares Mean is a mean estimated from a linear model. Least squares means are adjusted for the other terms in the model (such as covariates) and are less sensitive to missing data. Theoretically, they are better estimates of the true population mean.

In a previous post "Least squares means (marginal means) vs. means", the calculation of least squares mean is compared with the arithmetic mean.
In analyses of clinical trial data, the least-squares mean is more frequently used than the arithmetic mean because it is calculated from the analysis model (for example, analysis of variance, analysis of covariance, ...). In bioequivalence assessment, the analysis is typically performed on log-transformed data; the difference between the two least-squares means is back-transformed to give the ratio of geometric least-squares means (the geometric least-squares mean ratio), which - along with its 90% confidence interval - is the common approach for assessing bioequivalence.
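As an illustration of how a least-squares (adjusted) mean comes out of a model (a minimal sketch, not from the original post; the data and the variable names trt, baseline, and change are made up), assuming Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "trt": np.repeat(["A", "B"], 50),       # treatment arm
        "baseline": rng.normal(10, 2, 100),     # baseline covariate
    })
    df["change"] = 1.5 * (df["trt"] == "B") + 0.5 * df["baseline"] + rng.normal(0, 1, 100)

    # ANCOVA-type linear model: change ~ treatment + baseline covariate
    model = smf.ols("change ~ trt + baseline", data=df).fit()

    # least-squares mean per arm: the model-predicted mean at the overall average
    # baseline value, rather than the raw arithmetic mean of each arm
    grid = pd.DataFrame({"trt": ["A", "B"], "baseline": df["baseline"].mean()})
    print(model.predict(grid))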

Trimmed Mean may also be called truncated mean and is the arithmetic mean of the data values after a certain number or proportion of the highest and/or lowest values have been discarded. The trimming can be one-sided or two-sided.

The key for the trimmed mean calculation is to determine the percentage of data to be discarded and whether the trimming is one-sided or two-sided. The percentage of data to be discarded may be tied to the percentage of missing data.
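A minimal sketch of a two-sided trimmed mean (illustrative data only; assumes Python with scipy):

    import numpy as np
    from scipy import stats

    data = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 100])  # 100 is an extreme value

    # two-sided 10% trimming: discard the lowest 10% and the highest 10% of values
    trimmed = stats.trim_mean(data, proportiontocut=0.10)

    print(np.mean(data), trimmed)  # the trimmed mean is far less affected by the outlier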

A trimmed mean can be calculated and then used to fill in the missing data - a single imputation method for handling missing data. As a single imputation method, the trimmed mean has its limitations, but it is still used in analyses of clinical trials - usually for sensitivity analyses.

In the ICH E9(R1) "Addendum on Estimands and Sensitivity Analysis in Clinical Trials" training material, the trimmed mean is mentioned as an approach for handling intercurrent events under the composite strategy.
 


Monday, January 11, 2021

Single Imputation Methods for Missing Data: LOCF, BOCF, LRCF (Last Rank Carried Forward), and NOCB (Next Observation Carried Backward)

Missing data are always an issue when analyzing data from clinical trials. The handling of missing data has moved toward model-based approaches (such as multiple imputation and mixed-model repeated measures (MMRM)). Single imputation methods, while heavily criticized and increasingly set aside, remain practical approaches for handling missing data, especially for sensitivity analyses.

Single imputation methods replace a missing data point with a single value, and analyses are then conducted as if all the data were observed. The single value used to fill in the missing observation usually comes from the observed values of the same subject - Last Observation Carried Forward (LOCF), Baseline Observation Carried Forward (BOCF), and Next Observation Carried Backward (NOCB, the focus of this post). The single value can also be derived from other sources: Last Rank Carried Forward (LRCF), best- or worst-case imputation (assigning the worst possible value of the outcome to dropouts for a negative reason (treatment failure) and the best possible value to positive dropouts (cures)), mean value imputation, trimmed mean, ... Single imputation approaches also include regression imputation, which imputes the predictions from a regression of the missing variables on the observed variables, and hot deck imputation, which matches a case with missing values to a case with observed values that is similar with respect to the observed variables and then imputes the observed values of that respondent.

In this post, we discuss the single imputation methods of LOCF, BOCF, LRCF, and NOCB.

Last Observation Carried Forward (LOCF): A single imputation technique that imputes the last measured outcome value for participants who either drop out of a clinical trial or for whom the final outcome measurement is missing. LOCF is usually used in longitudinal study designs where the outcome is measured repeatedly at pre-specified intervals. LOCF usually requires at least one post-baseline measurement. It is the most widely used single imputation method.

Baseline Observation Carried Forward (BOCF): A single imputation technique that imputes the baseline outcome value for participants who either drop out of a clinical trial or for whom the final outcome measurement is missing. BOCF is usually used in study designs with perhaps only one post-baseline measurement (i.e., the outcome is measured only at baseline and at the end of the study).

Last Rank Carried Forward (LRCF): The LRCF method carries the rank of the last observed value forward to the last visit and is the non-parametric version of LOCF. However, unlike LOCF, which is based on observations from the same subject, the ranks in LRCF come from all subjects with non-missing observations at a specific visit. Because the number of missing values differs from the earlier visits to the later visits, repeated ranking, carrying forward, and re-ranking are needed. Here are some good references for LRCF:

LRCF is thought to have the following features:

In a paper by Jing et al, the LRCF was used for missing data imputation: 

"...The last rank carried forward or last observation carried forward was assigned to patients who withdrew prematurely from the study or study drug for other reasons or who did not perform the 6-minute walk test for any reason not mentioned above (eg, missed visit), provided that the patient performed at least 1 postbaseline 6-minute walk test.
Next Observation Carried Backward (NOCB): NOCB is a similar approach to LOCF but works in the opposite direction by taking the first observation after the missing value and carrying it backward. NOCB may also be called Next Value Carried Backward (NVCB) or Last Observation Carried Backward (LOCB).
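To make the direction of these carry-forward/carry-backward rules concrete, here is a minimal sketch (not from the original post; the subject IDs, visit names, and values are made up) using Python with pandas, with one row per subject and one column per visit:

    import numpy as np
    import pandas as pd

    # hypothetical outcome measurements; NaN marks a missing visit
    visits = pd.DataFrame(
        {"baseline": [10.0, 12.0], "week4": [11.0, np.nan], "week8": [np.nan, 14.0]},
        index=["subj01", "subj02"],
    )

    locf = visits.ffill(axis=1)   # LOCF: carry the last observed value forward
    nocb = visits.bfill(axis=1)   # NOCB: carry the next observed value backward

    # BOCF: replace any missing post-baseline value with the baseline value
    bocf = visits.apply(lambda row: row.fillna(row["baseline"]), axis=1)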

NOCB may be useful for handling missing data arising from an external control group, from Real-World Data (RWD), or from electronic health records (EHRs), where outcome data collection is usually not structured and does not follow a pre-specified visit schedule.

I can foresee that NOCB may also be an approach for handling missing data due to the COVID-19 pandemic. Because of the pandemic, subjects may not be able to come to the clinic for the outcome measurement at the end of the study, and the measurement may be performed at a later time beyond the visit window allowance. Instead of having a missing observation for the end-of-study visit, the NOCB approach can be applied to carry the next available outcome measure backward.

The NOCB approach, while not popular, can be found in some publications and regulatory approval documents. Here are some examples: 


In an article by Wyles et al (2015, NEJM), "Daclatasvir plus Sofosbuvir for HCV in Patients Coinfected with HIV-1": "Missing response data at post-treatment week 12 were inferred from the next available HCV RNA measurement with the use of a next-value-carried-backward approach."

In BLA 761052 for Brineura (cerliponase alfa) injection, indicated for late-infantile neuronal ceroid lipofuscinosis type 2 (CLN2) - Batten disease, NOCB was used to handle missing data for the comparison with data from a natural history study.

Because intervals between clinical visits vary a lot in Study 901, the agency recommended performing analyses using both the last available Motor score and next observation carried backward (NOCB) for the intermediate data points although the former one is determined as the primary. 

In the FDA Briefing Document for the Endocrinologic and Metabolic Drugs Advisory Committee Meeting for NDA 210645, Waylivra (volanesorsen) injection for the treatment of familial chylomicronemia syndrome, NOCB was used in some of the sensitivity analyses:

Similar planned (prespecified) analyses using different variables, such as slightly different endpoint definitions (e.g. worst maximum pain intensity versus average maximum pain intensity), or imputation methods for missing data (next observation carried backward versus imputation of zero for missing values) did not demonstrate treatment differences.

 Missing values were pre-specified to be imputed using Next Observation Carried Back (NOCB); i.e., if a patient did not complete the questionnaire for several weeks, the next value entered was assumed to have occurred during all intervening (missing) weeks.

 Missing data for any post-baseline visit will be imputed by using Next Observation Carried Back (NOCB) if there is a subsequent score available. Missing data after the last available score of each patient will not be imputed.

In NDA 212157 of Celecoxib Oral Solution for the treatment of acute migraine, NOCB was used for a sensitivity analysis:

Headache Pain Freedom at 2 hours - Sensitivity Analysis

To analyze the missing data for the primary endpoint, Dr. Ling performed an analysis analyzing patients who took rescue medications as nonresponders and then also imputing missing data at the 2-hour time point using the next available time point of information (Next Observation Carried Backward (NOCB)) or a worst-case type of imputation (latter not shown in table).

Single imputation methods are generally not recommended for the primary analysis because of the following disadvantages (issues): 

  • Single imputation usually does not provide an unbiased estimate
  • Inferences (tests and confidence intervals) based on the filled-in data can be distorted by bias if the assumptions underlying the imputation method are invalid
  • Statistical precision is overstated because the imputed values are assumed to be true.
  • Single imputation methods risk biasing the standard error downwards by ignoring the uncertainty of imputed values. Therefore, the confidence intervals for the treatment effect calculated using single imputation methods may be too narrow and give an artificial impression of precision that does not really exist.  
  • Single imputation methods such as LOCF, NOCB, and BOCF do not reflect MAR (missing at random) data mechanisms.

Further Readings:

Monday, January 04, 2021

Synthetic Control Arm (SCA), External Control, Historical Control

Lately, the term 'synthetic control', 'synthetic control arm', or SCA for short, has become popular - driven mainly by the desire to design clinical trials that are more efficient than the traditional, gold-standard RCTs (randomized controlled trials) with a concurrent control group.

In a previous post, I compared historical control versus external control in clinical trials. The subtle difference is mainly in the time element. A historical control is one type of external control, but the reverse is not true: an external control can be a historical control or a contemporaneous control. For example, in a clinical trial to assess the efficacy and safety of donor lungs preserved using the ex-vivo lung perfusion (EVLP) technique, the EVLP lung transplantation cohort was compared to a contemporaneous (not concurrent) control cohort formed through matched controls from traditional lung transplantation patients.

Then what is 'synthetic control' or 'synthetic control arm'?

Synthetic control arm is the use of synthetic data as a control arm in clinical trials. According to an article "Synthetic data in the civil service" in the latest issue of SIGNIFICANCE, synthetic data is defined as "artificially generated data that are modelled on real data, with the same structure and properties as the original data, except that they do not contain any real or specific information about individuals. The goal of synthetic data generation is to create a realistic copy of the real data set, carefully maintaining the nuances of the original data, but without compromising important pieces of personal information."

A synthetic control arm is a control arm generated from existing patient-level data sources. It can serve as the comparator for a single-arm clinical trial or can augment a smaller concurrent control group (for example, with an active:control ratio of 3:1 or 4:1) in an RCT.

In a presentation in the Harvard Medical School Executive Education Webinar Series, Mr. Chatterjee presented "Synthetic Control Arms in Clinical Trials and Regulatory Applications" and defined the 'synthetic control arm' as follows:

In a paper by Thorlund et al, "Synthetic and External Controls in Clinical Trials – A Primer for Researchers", the authors stated that synthetic control arms are external control arms and that the two terms can be used interchangeably:
External control arms are also called “synthetic” control arms as they are not part of the original concurrent patient sample that would have been randomized into the experimental or the control treatment arms as in a traditional RCT. External controls can take many forms. For example, external control arms can be established using aggregated or pooled data from placebo/control arms in completed RCTs or using RWD (Real World Data) and pharmacoepidemiological methods. Pooled data from historical RCTs can serve as external controls depending on the availability of selected “must have” data, similarity of patients, recency and relevancy of experimental treatments that were tested, availability and similarity of relevant endpoints (eg, operational definitions and assessments), and similarity of other important study procedures that were conducted in these historical trials. It is important to note that using control data from historical RCTs still results in a nonrandomized comparison but has the advantage of standardized data collection in a trial setting and patients who enroll in clinical trials may have more similar characteristics than those who do not.

However, I think there are subtle differences between these two terms. With 'synthetic' control arms, the term 'synthetic' implies some degree of selection, manipulation, derivation, matching, pooling, or borrowing from the source data. Just as meta-analysis is also called research synthesis and requires statistical approaches to combine the results from multiple scientific studies, the 'synthetic' control also requires statistical approaches to process data from multiple sources to form a control group that replaces the concurrent control of a traditional RCT.

The source data for constructing a synthetic control can come from previous RCTs, real-world data, registry data, natural history studies, electronic health records, ... The source data must be subject-level data, not summary or aggregate data.

ICH E10 "CHOICE OF CONTROL GROUP AND RELATED ISSUES IN CLINICAL TRIALS" included "External Control (including Historical Control)" as one of the options as the control groups in clinical trials. The external control here is not the same as synthetic control. 

1.3.5 External Control (Including Historical Control)
An externally controlled trial compares a group of subjects receiving the test treatment with a group of patients external to the study, rather than to an internal control group consisting of patients from the same population assigned to a different treatment. The external control can be a group of patients treated at an earlier time (historical control) or a group treated during the same time period but in another setting. The external control may be defined (a specific group of patients) or nondefined (a comparator group based on general medical knowledge of outcome). Use of this latter comparator is particularly treacherous (such trials are usually considered uncontrolled) because general impressions are so often inaccurate. So-called baseline controlled studies, in which subjects' status on therapy is compared with status before therapy (e.g., blood pressure, tumor size), have no internal control and are thus uncontrolled or externally controlled.  

How to Create a Synthetic Control Arm? 

The first step of creating a synthetic control arm is to harmonize the source data. The data from different sources or from different clinical trials should be standardized so that they can be used for the synthesis process. 

Various statistical approaches can be used to create a synthetic control arm. In an audiobook on synthetic control arms by Cytel, propensity scoring and Bayesian Dynamic Borrowing methods were discussed. 
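As an illustration of the propensity-score idea only (a sketch, not taken from the Cytel material; the covariates, the simulated data, and the choice of 1:1 nearest-neighbor matching with replacement are assumptions), external or historical control subjects can be matched to trial subjects on baseline covariates to form the synthetic control arm:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # hypothetical pooled data: trial subjects (in_trial=1) and external/historical subjects (in_trial=0)
    rng = np.random.default_rng(1)
    n = 300
    data = pd.DataFrame({
        "age": rng.normal(60, 10, n),
        "baseline_score": rng.normal(50, 8, n),
        "in_trial": rng.integers(0, 2, n),
    })

    covariates = ["age", "baseline_score"]

    # propensity score: probability of being in the trial given the baseline covariates
    ps_model = LogisticRegression().fit(data[covariates], data["in_trial"])
    data["ps"] = ps_model.predict_proba(data[covariates])[:, 1]

    trial = data[data["in_trial"] == 1]
    external = data[data["in_trial"] == 0]

    # 1:1 nearest-neighbor matching (with replacement) on the propensity score
    nn = NearestNeighbors(n_neighbors=1).fit(external[["ps"]])
    _, idx = nn.kneighbors(trial[["ps"]])
    synthetic_control = external.iloc[idx.ravel()]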

The synthetic control arm can be considered as an approach of 'borrowing control' - i.e., some controls are borrowed from historical data. There are numerous options for borrowing controls: 

  • Pooling: adds historical controls to randomized controls 
  • Performance criterion: uses historical data to define performance criterion for current, treated-only trial to beat 
  • Test then pool: test if controls sufficiently similar for pooling 
  • Power priors: historical control discounted when added to randomized controls (see the sketch after this list)
  • Hierarchical modeling: variation between current vs. historical data is modeled in Bayesian fashion 
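As a toy illustration of the power-prior idea (a sketch under assumed numbers, for a binary control endpoint with a conjugate beta prior; not a definitive implementation), the historical control data are discounted by a factor a0 between 0 and 1 before being combined with the randomized controls:

    # power prior for a control response rate (beta-binomial, conjugate case)
    # a0 = 0 ignores the historical data; a0 = 1 pools it fully
    a0 = 0.5                      # discount factor for the historical control data
    y_hist, n_hist = 30, 100      # hypothetical historical control: 30 responders out of 100
    y_curr, n_curr = 12, 40       # hypothetical randomized (concurrent) control: 12 of 40

    # start from a vague Beta(1, 1) prior, add the discounted historical data,
    # then update with the current control data
    alpha = 1 + a0 * y_hist + y_curr
    beta = 1 + a0 * (n_hist - y_hist) + (n_curr - y_curr)

    posterior_mean = alpha / (alpha + beta)
    print(posterior_mean)   # posterior mean of the control response rate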

In the article by Thorlund et al, the pros and cons of different methods for generating synthetic control arms were discussed. 


In Mr. Chatterjee's presentation, "Synthetic Control Arms in Clinical Trials and Regulatory Applications", there is a diagram describing the process for creating a synthetic control arm.


Even though synthetic control arms, the use of real-world data, and single-arm clinical trials are very appealing, challenges lie ahead and regulatory acceptance is uncertain. There may be limited use in special cases (such as ultra-rare diseases and pediatric clinical trials) and for post-marketing activities (such as label expansion, label modification, and post-marketing studies), but synthetic control arms are not ready for prime time as a replacement for the concurrent control in traditional RCTs.

In an article at Statnews.com, "Synthetic control arms can save time and money in clinical trials":

Even with the FDA making the use of real-world data a strategic priority, synthetic control arms can’t be used across the board to replace control arms. Synthetic control arms require that the disease is predictable (think idiopathic pulmonary fibrosis) and that its standard of care is well-defined and stable. That certainly isn’t the case for every disease.

It’s also important to consider that even when information is available from real-world data sources, it may be difficult to extract or of low quality. Routinely captured health care data, such as electronic health records, are typically siloed, fragmented, and unstructured. They are also often incomplete and difficult to access. New tools and methodologies are needed to consolidate, organize, and structure real-world data to generate research-grade evidence and ensure that confounding variables are accounted for in analyses. Analytic techniques such as natural language processing and machine learning will be needed to extract relevant information from structured and unstructured data.

The same view is also expressed in a Pink Sheet article "External Control Arms: Better Than Single-Arm Studies But No Replacement For Randomization".

Synthetic control group derived from historical clinical trial data could augment smaller randomized trials and yield better information than single-arm studies, but this approach should not be viewed as a substitute for randomized trials where feasible.

ADDITIONAL REFERENCES: