Over the last seven to eight years, there has been a gradual shift away from single-imputation methods such as LOCF (last observation carried forward) toward MMRM (mixed model for repeated measures), especially after the PhRMA initiative published the paper "Recommendations for the Primary Analysis of Continuous Endpoints in Longitudinal Clinical Trials" in 2008 and the National Research Council (NRC) published its report "The Prevention and Treatment of Missing Data in Clinical Trials" in 2010 (often cited as the NAS report). The MMRM approach to handling missing data does not employ formal imputation. Instead, an MMRM analysis makes use of all available data, including data from subjects with only partial observations (i.e., with missing data), to arrive at an estimate of the mean treatment effect without filling in the missing items.
The use of MMRM is based on the missing at random (MAR) assumption: dropouts are assumed to behave similarly to other patients in the same treatment group, and possibly with similar covariate values, had they not dropped out. While MAR may seem to be a compromise between missing completely at random (MCAR) and missing not at random (MNAR), the MAR assumption as the basis for using MMRM has been scrutinized and criticized in two recent FDA advisory committee meetings.
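For reference, a typical MMRM for a continuous endpoint measured at scheduled visits can be written in a generic form as below (this is a textbook formulation, not the specific model used in any of the submissions discussed here):

$$
y_{ij} = x_{ij}^{\top}\beta + \varepsilon_{ij}, \qquad \boldsymbol{\varepsilon}_{i} = (\varepsilon_{i1},\ldots,\varepsilon_{iT})^{\top} \sim N(\mathbf{0},\, \boldsymbol{\Sigma}),
$$

where $y_{ij}$ is the response of subject $i$ at visit $j$, $x_{ij}$ typically includes treatment, visit, the treatment-by-visit interaction, and baseline covariates, and $\boldsymbol{\Sigma}$ is a within-subject variance-covariance matrix (usually unstructured) estimated by restricted maximum likelihood (REML). A subject with missing visits still contributes his or her observed visits to the likelihood, which is why no explicit imputation is needed; the resulting treatment-effect estimate is unbiased only if the missing data are MAR.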
In her paper "The Role of Statistics in Regulatory Decision Making", Dr. Lisa LaVange mentioned these two advisory committee meetings as examples of how FDA statisticians helped with the interpretation of results from MMRM analyses:
“When a patient discontinues therapy for intolerability, however, such an assumption may not be reasonable. More important, the estimated treatment effect will be biased in this case, leaving regulators with some uncertainty as to what information should be included in the product label, if approved.”
In both advisory committee meetings she cited, FDA statisticians challenged the assumptions underlying the use of MMRM to handle the missing data.
The first advisory committee meeting was the January 30, 2013 meeting of the Pulmonary-Allergy Drugs Advisory Committee, held to review the new drug application (NDA) from Pharmaxis Ltd. seeking approval of mannitol inhalation powder for cystic fibrosis (CF):
“In January 2013, an advisory committee meeting was held to review mannitol inhalation powder for the treatment of cystic fibrosis, and missing data issues were an important part of the discussion. Results in favor of mannitol were highly significant in one phase 3 study, but differential dropout rates were observed, indicating that some patients receiving mannitol could not tolerate therapy. The prespecified primary analysis involved a comparison between treatment groups of the average outcome across visits in the context of an MMRM. This analysis was flawed in the presence of informatively missing data, but a sensitivity analysis using baseline values in lieu of missing observations was significantly in favor of mannitol. Citing the NAS report, committee members raised questions about the usefulness of this analysis during the committee’s discussion. The statistical review team provided alternative sensitivity analyses based on empirical distribution functions that appropriately addressed the tolerability issue and provided clarity to the discussion.”
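The "sensitivity analysis using baseline values in lieu of missing observations" mentioned in the quote is essentially a baseline-observation-carried-forward (BOCF) type of imputation. A minimal sketch of that idea in Python/pandas is shown below; the column names and toy data are hypothetical and are not taken from the mannitol submission:

```python
import pandas as pd

# Hypothetical long-format data: one row per subject per scheduled visit,
# with NaN for visits missed after dropout (visit 0 = baseline).
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [0, 1, 2, 0, 1, 2],
    "fev1":    [2.1, 2.3, None, 1.9, None, None],
})

# Each subject's baseline value.
baseline = df[df["visit"] == 0].set_index("subject")["fev1"]

# BOCF-style sensitivity analysis: replace every missing post-baseline
# value with that subject's own baseline value.
df["fev1_bocf"] = df["fev1"].fillna(df["subject"].map(baseline))
print(df)
```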
The second advisory committee meeting was the September 10, 2013 meeting of the Pulmonary-Allergy Drugs Advisory Committee, held to review GSK's new drug application (NDA) 203975 for umeclidinium and vilanterol inhalation powder for the treatment of COPD:
“At a second advisory committee meeting held in September 2013 to discuss umeclidinium and vilanterol inhalation powder for the treatment of chronic obstructive pulmonary disease, differential dropout rates were observed in the phase 3 studies, with more placebo patients discontinuing due to lack of efficacy compared with those treated with the investigational drug. The statistical reviewer presented a sensitivity analysis using a ‘‘jump to reference’’ method that assumed any patient discontinuing treatment early would behave similarly to placebo patients post discontinuation, arguing that such an assumption was reasonable given the drug’s mechanism of action. The results of this and other analyses helped inform committee members about the impact of missing data on the primary results and also helped focus the discussion on the importance of not only how much data were missing but the reasons why and the way in which postdiscontinuation data were incorporated in the analysis”
The transcript of the September 10, 2013 meeting of the Pulmonary-Allergy Drugs Advisory Committee (PADAC) detailed the discussion of the issues with the application of MMRM:
“Given the large amount of patient dropout in the primary efficacy studies, it is important to consider the potential effect of missing data on the reliability of efficacy results. Exploratory analyses showed that patients who dropped out on the active treatment arms tended to show benefit over placebo with respect to FEV1 prior to withdrawal. The primary MMRM model assumes that data after dropout are missing at random. Therefore, if the interest is in the effectiveness of the treatment assignment in all randomized subjects, regardless of adherence, i.e., the intention-to-treat estimand, then the primary analysis assumes that patients who dropped out went on to maintain that early treatment effect, even after treatment discontinuation. This assumption is not scientifically plausible because bronchodilators are symptomatic and not disease-modifying treatments, and thus any effect of the treatment will go away within a few days of stopping therapy. Therefore, a sensitivity analysis to evaluate the intention-to-treat estimand should not assume that any early treatment effect was maintained through 24 weeks in patients who prematurely stopped treatment. The sensitivity analysis carried out by the applicant that we considered most reasonable was a Jump to Reference (J2R) multiple imputation approach. The assumptions of this approach in comparison to those of the MMRM are illustrated in this figure, which displays hypothetical data for patients dropping out after week eight.
Average FEV1 values over time are displayed by circles for patients on UMEC/VI and by triangles for patients on placebo. The red lines display observed data prior to dropout, illustrating an early treatment benefit. The green lines display the trends in pulmonary function after dropout that are assumed by the MMRM model, i.e., an assumption that the early benefit was maintained throughout the 24 weeks. The blue lines display the trends assumed by the Jump to Reference sensitivity analysis. This analysis, like the MMRM, assumes that placebo patients continued on the trend observed prior to dropout. However, unlike the MMRM, the Jump to Reference (J2R) approach multiply imputes missing data in patients on active treatment under the assumption that any treatment effect observed prior to dropout would have gone away by the time of the next visit. In other words, the assumption is that pulmonary function in these patients after dropout tends to look like that observed in patients on placebo.
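To make the "jump to reference" idea concrete, here is a deliberately simplified Python sketch of a J2R-style multiple imputation: missing post-dropout values for patients on active treatment are drawn from the placebo (reference) arm's visit-specific distribution rather than carried forward along the early treatment benefit. A full J2R implementation imputes from a multivariate normal model fitted to the reference arm, conditional on each patient's observed history, and combines the results with Rubin's rules; the sketch below only illustrates the core assumption, and all variable names, the data layout, and the analysis model are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def j2r_impute_once(df, ref_arm="placebo", value="fev1"):
    """One simplified J2R imputation: fill each missing post-dropout value
    with a draw from the reference (placebo) arm's distribution at the
    same visit, instead of carrying the early treatment benefit forward."""
    out = df.copy()
    # Visit-specific mean and SD in the reference arm, observed data only.
    ref_stats = (df[df["arm"] == ref_arm]
                 .groupby("visit")[value]
                 .agg(["mean", "std"]))
    for idx in out.index[out[value].isna()]:
        visit = out.at[idx, "visit"]
        mu, sd = ref_stats.loc[visit, "mean"], ref_stats.loc[visit, "std"]
        out.at[idx, value] = rng.normal(mu, sd)
    return out

def treatment_effect(df, value="fev1", last_visit=24):
    """Crude analysis of one completed dataset: difference in arm means at
    the last scheduled visit (a stand-in for the real MMRM/ANCOVA model)."""
    last = df[df["visit"] == last_visit].groupby("arm")[value].mean()
    return last["active"] - last["placebo"]

def j2r_estimate(df, n_imputations=20):
    """Average the treatment effect over several J2R-imputed datasets.
    (Rubin's rules for the variance are omitted in this sketch.)"""
    effects = [treatment_effect(j2r_impute_once(df)) for _ in range(n_imputations)]
    return float(np.mean(effects))
```

The expected input is a long-format dataframe with columns "arm" ("active" or "placebo"), "subject", "visit", and "fev1", where post-dropout visits are recorded as NaN.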
The results of this sensitivity analysis approach as compared to those of the primary analysis are shown in this table for Study 373. In all relative treatment comparisons, statistical significance was maintained in the sensitivity analysis. However, estimated magnitudes of treatment effects were approximately 20 to 30 percent smaller in the sensitivity analyses than in the primary analyses. For example, the estimated effect size relative to placebo for UMEC/VI at the proposed 62.5/25 dose was about 0.13 liters, as compared to 0.17 liters in the primary analysis. Notably, all sensitivity analyses to address the missing data are based on untestable assumptions about the nature of the unobserved data.”
The caveat about using the MMRM approach is also mentioned in the EMA's "Guideline on Missing Data in Confirmatory Clinical Trials", released in 2009. The guideline noted that different types of variance-covariance matrices for the MMRM model, and different assumptions used to model the unobserved measurements, could lead to different conclusions. It suggested that "the precise option settings must be fully justified and predefined in advance in detail, so that the results could be replicated by an external analyst":
"The methods above (e.g. MMRM and GLMM) are unbiased under the MAR assumption and can be thought of as aiming to estimate the treatment effect that would have been observed if all patients had continued on treatment for the full study duration. Therefore, for effective treatments these methods have the potential to overestimate the size of the treatment effect likely to be seen in practice and hence to introduce bias in favour of experimental treatment in some circumstances. In light of this the point estimates obtained can be similar to those from a complete cases analysis. This is problematic in the context of a regulatory submission as confirmatory clinical trials should estimate the effect of the experimental intervention in the population of patients with greatest external validity and not the effect in the unrealistic scenario where all patients receive treatment with full compliance to the treatment schedule and with a complete follow-up as per protocol. The appropriateness of these methods will be judged by the same standards as for any other approach to missing data (i.e. absence of important bias in favour of the experimental treatment) but in light of the concern above, the use of only these methods to investigate the efficacy of a medicinal product in a regulatory submission will only be sufficient if missing data are negligible. The use of these methods as a primary analysis can only be endorsed if the absence of important bias in favour of the experimental treatment can be substantiated"
When the study endpoint is continuous and measured longitudinally, and MMRM is used for the analysis, the assumptions underlying MMRM may well be challenged. Some good practices when using MMRM are: 1) always think carefully about the assumptions required for MMRM to be valid; 2) use more than one imputation approach for sensitivity analyses (see the sketch below); 3) consider using a conservative imputation approach such as J2R.
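Putting the second and third practices together, a simple sensitivity workflow might report the observed-data estimate side by side with one or more imputation-based estimates, for example (reusing the hypothetical helper functions and long-format dataframe sketched earlier in this post):

```python
# Hypothetical sensitivity comparison: observed-case estimate vs. a
# conservative reference-based (J2R) estimate for the same dataset.
observed_case = treatment_effect(df.dropna(subset=["fev1"]))
j2r = j2r_estimate(df, n_imputations=50)
print(f"observed-case estimate: {observed_case:.3f} L")
print(f"J2R estimate:           {j2r:.3f} L")
```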