Friday, December 23, 2022

Assessing potential unblinding due to imbalance in side effects through exit questionnaires

The critical features of RCTs include '(concurrent) control', 'randomization', and 'blinding'. These features can minimize conscious or unconscious biases in the endpoint assessments, including both efficacy and safety assessments.
 
Blinding is a procedure in which one or more parties in a trial are kept unaware of which treatment arms participants have been assigned to, i.e., which treatment was received. Conversely, unblinding is the process by which the treatment allocation is revealed so that one or more parties in a trial become aware of which treatment arms the trial participants are on. To maintain the integrity of the RCT, unblinding should occur only after the completion of the study and after the clinical database has been locked (no further modifications to the clinical data), or only in special situations (e.g., unblinding for the data monitoring committee, unblinding of individual trial participants for SUSAR reporting). A couple of previous posts discussed exactly the same issue.

Blinding is an important aspect of any trial. How a trial was blinded should be accurately recorded in order to allow readers to interpret the results of the study. If blinding is ever broken for individual participants during a trial, it needs to be justified and explained.

To implement and maintain the blinding (or treatment concealment), the clinical trial materials need to be manufactured and packaged in the same way. The control product (typically placebo) will have the same specifications as the test product in shape, size, color, texture, weight, taste, and smell.

The most difficult part of maintaining the blinding is that the test product and the control product (usually placebo) may have different side effects. The clinical trial participants and the investigators may be able to guess which treatment group the participants are on. It is quite challenging to defend the integrity of the blinding if the two treatment groups do have different side effect or adverse event profiles.

Amylyx recently ran into this issue and successfully defended the integrity of the blinding for its pivotal study (the CENTAUR trial), which was published in NEJM. The CENTAUR trial results were the basis for Amylyx's NDA submission. In a rare occurrence, FDA convened two meetings of the Peripheral and Central Nervous System Drugs Advisory Committee (PCNS).

In the first PCNS adcom meeting on March 30, 2022, concerns about maintaining the blinding were raised:
"The potential for diarrhea and bitter taste were described in the informed consent, which may have alerted the patients to these symptoms and could have led to functional unblinding during the study. These are potential review issues we have identified which contribute to the uncertainty of the results."
In the second PCNS adcom meeting on September 7, 2022, the sponsor addressed the potential unblinding issue in its briefing book.

The sponsor concluded:
"Based on an exit questionnaire performed at the end of the randomized phase, it asked investigators and participants what treatment arm they were assigned to. Neither study investigators nor participants were able to guess the treatment assignment. The active group was not able to guess their treatment assignment any better than chance, indicating that taste and GI adverse events were not leading to unblinding."
Assessing the blinding through exit questionnaires can be risky, though. As described in my previous post "Is blinded study really blinded? - assessment of blinding/unblinding in clinical trials":
"Ideally, in a double-blind trial, it is a good practice to evaluate for both the subjects and investigators whether or not blinding / masking has been preserved. However, in the real world, it is rare in double-blinded clinical trials to include a formal assessment of how well the blinding has been preserved. If the assessment of blinding becomes a routine, I think that many studies will show that subjects/investigators guessed correctly more frequently than they should have done by chance alone. Part of the reason this assessment has not been done often is perhaps the difficulty to explain the study results if the blinding is found to be compromised. It will be extremely difficult to assess the magnitude of the impact on the safety and efficacy evaluation if the blinding/treatment assignment concealment is compromised."

Thursday, December 01, 2022

Sample size re-estimation or sample size increase?

Recently, a press release from a biotech company caught my eye. It appears to be an example of an adaptive design with sample size re-estimation; however, it is unusual in that the sample size was decreased, whereas sample size re-estimation usually results in an increase in sample size.
Bellerophon Therapeutics, Inc. (Nasdaq: BLPH) (“Bellerophon” or the “Company”), a clinical-stage biotherapeutics company focused on developing treatments for cardiopulmonary diseases, announced today that the U.S. Food and Drug Administration (FDA) has accepted the Company’s proposal to reduce the study size for its ongoing registrational REBUILD Phase 3 trial of INOpulse® for the treatment of fibrotic Interstitial Lung Disease (fILD). The new study size of 140 subjects does not impact the trial’s principal objective or endpoints and maintains power of >90% (p-value < 0.01) for the primary endpoint of Moderate to Vigorous Physical Activity (MVPA) based on the effect size observed in Phase 2.

Following the evaluation of baseline MVPA characteristics, as measured by actigraphy, compliance to treatment and review of safety data of the randomized subjects in the ongoing Phase 3 REBUILD study, the trial’s independent Data Monitoring Committee (DMC) supported reducing the target study size from 300 to 140 subjects.
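As a rough, back-of-the-envelope illustration of why a smaller study can still retain more than 90% power, the sketch below uses a normal-approximation sample size formula for a two-sample comparison of means. The standardized effect sizes, alpha level, and continuous-endpoint assumption are placeholders for illustration, not the actual REBUILD design parameters.

```python
# A rough check of how the required sample size depends on the assumed
# standardized effect size. All numbers are hypothetical placeholders.
from scipy.stats import norm

def n_per_arm(delta, alpha=0.01, power=0.90):
    """Normal-approximation sample size per arm for a two-sample comparison
    of means with standardized effect size delta (two-sided alpha)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / delta ** 2

for delta in (0.5, 0.7, 0.9):
    print(f"delta = {delta}: ~{n_per_arm(delta):.0f} subjects per arm")
```

The loop simply shows that the required sample size is driven largely by the assumed effect size, which is why an effect size observed in a Phase 2 study can support a substantially smaller Phase 3 than the original planning assumptions.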
Sample size re-estimation is one type of adaptive design in which the sample size can be adjusted during the study based on a prespecified rule. Sample size re-estimation has several special features:

Group Sequential Design (GSD) and Sample Size Re-estimation

Clinical trials with adaptive designs can take different forms depending on what the adaptations are. Two commonly utilized adaptive designs are the group sequential design (GSD) and sample size re-estimation (SSR). Both GSD and SSR are implemented through interim analyses conducted by an independent data monitoring committee. In GSD studies, we set a large sample size and hope to stop the trial early at an interim analysis for overwhelming efficacy, futility, or safety. In an adaptive design with SSR, we start with a smaller study and possibly increase the sample size after an interim analysis. Both approaches can achieve similar benefits: a reduced expected sample size and potentially an earlier conclusion.
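As a rough illustration of the "start large, stop early" logic of a GSD, the simulation below compares a fixed design with a two-stage design that uses a single interim look and a Pocock-type boundary. The effect size, stage sizes, and number of simulations are arbitrary assumptions chosen only for illustration.

```python
# A Monte Carlo sketch contrasting a fixed design with a two-stage group
# sequential design (one interim look, Pocock boundary). The effect size,
# stage sizes, and number of simulations are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2022)
delta = 0.4          # assumed standardized treatment effect
n_stage = 66         # subjects per arm per stage (132 per arm in total)
c_pocock = 2.178     # Pocock critical value for 2 looks, two-sided alpha = 0.05
n_sims = 20_000

def z_stat(x, y):
    """Two-sample z statistic assuming known unit variance."""
    n = len(x)
    return (x.mean() - y.mean()) / np.sqrt(2.0 / n)

rejections, sample_sizes = 0, []
for _ in range(n_sims):
    trt1 = rng.normal(delta, 1, n_stage)
    ctl1 = rng.normal(0.0, 1, n_stage)
    if abs(z_stat(trt1, ctl1)) >= c_pocock:      # stop early for efficacy
        rejections += 1
        sample_sizes.append(2 * n_stage)
        continue
    trt2 = np.concatenate([trt1, rng.normal(delta, 1, n_stage)])
    ctl2 = np.concatenate([ctl1, rng.normal(0.0, 1, n_stage)])
    if abs(z_stat(trt2, ctl2)) >= c_pocock:
        rejections += 1
    sample_sizes.append(4 * n_stage)

print("Empirical power:", rejections / n_sims)
print("Expected total sample size:", np.mean(sample_sizes))
print("Fixed-design (maximum) total sample size:", 4 * n_stage)
```

Under the assumed effect, a sizeable fraction of simulated trials stop at the interim look, so the expected sample size falls well below the fixed-design maximum.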

Blinded Sample Size Re-estimation and Unblinded Sample Size Re-estimation

In the FDA guidance "Adaptive Designs for Clinical Trials of Drugs and Biologics", sample size re-estimation is described in section B, "adaptations to the sample size". Blinded sample size re-estimation is based on interim estimates of nuisance parameters, such as the standard deviation for a continuous outcome measure or the overall event rate for a discrete outcome measure. Unblinded sample size re-estimation is a type of adaptive design in which prospectively planned modifications to the sample size are based on comparative interim results. Blinded sample size re-estimation may be conducted by the sponsor's statistician, while unblinded sample size re-estimation must be performed through an independent data monitoring committee.
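A blinded re-estimation of a nuisance parameter can be sketched as follows: the common standard deviation is re-estimated from pooled interim outcomes with the treatment labels discarded, and the sample size is recomputed while keeping the originally assumed treatment difference. All numbers below are made up for illustration.

```python
# A minimal sketch of blinded sample size re-estimation for a continuous
# endpoint: only the pooled (treatment-blinded) spread of the interim data is
# used to update the nuisance standard deviation. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

def n_per_arm(diff, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm for a two-sample comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z * sd / diff) ** 2))

assumed_diff, planning_sd = 5.0, 10.0
print("Planned n per arm:", n_per_arm(assumed_diff, planning_sd))

# Simulated interim data; the re-estimation step below never uses the labels.
rng = np.random.default_rng(1)
trt = rng.normal(assumed_diff, 12.0, 60)   # true SD larger than planned
ctl = rng.normal(0.0, 12.0, 60)
pooled = np.concatenate([trt, ctl])        # treatment labels discarded (blinded)

blinded_sd = pooled.std(ddof=1)            # one-sample SD; slightly inflated by
                                           # the between-group difference
print("Blinded SD estimate:", round(blinded_sd, 1))
print("Re-estimated n per arm:", n_per_arm(assumed_diff, blinded_sd))
```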

Sample Size Re-estimation and Sample Size Increase

In clinical trials with prospectively planned sample size re-estimation, the sample size is usually increased; it is very rare for the sample size to be decreased after the interim analysis. For adaptive clinical trials with adaptation of the sample size (i.e., sample size re-estimation), the initial sample size estimation can be based on more aggressive assumptions that result in a smaller sample size. In the middle of the study, interim analyses are performed, and a decision can be made (by the independent DMC, following a prespecified rule) on whether or not the sample size should be increased.
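The decision itself is often framed in terms of conditional power at the interim analysis. The sketch below computes conditional power under the observed trend and applies a "promising zone"-style rule; the zone cutoffs, the cap on the increase, and the unadjusted critical value are illustrative assumptions, not a recommended design.

```python
# A sketch of an unblinded, conditional-power-based sample size rule of the
# "promising zone" type. The zone cutoffs and the cap on the new sample size
# are illustrative assumptions only.
from math import sqrt
from scipy.stats import norm

def conditional_power(z_interim, info_frac, theta, z_alpha=1.96):
    """Conditional power of rejecting at the final analysis, given the interim
    z-statistic, the information fraction, and a drift parameter theta
    (theta = expected z-statistic at full information)."""
    num = z_alpha - z_interim * sqrt(info_frac) - theta * (1 - info_frac)
    return 1 - norm.cdf(num / sqrt(1 - info_frac))

def ssr_decision(z_interim, info_frac, n_planned, n_max):
    theta_hat = z_interim / sqrt(info_frac)   # drift under the observed trend
    cp = conditional_power(z_interim, info_frac, theta_hat)
    if cp >= 0.80:
        return cp, n_planned                  # favorable zone: keep planned size
    if cp >= 0.30:
        return cp, n_max                      # promising zone: increase to the cap
    return cp, n_planned                      # unfavorable zone: no increase

cp, n_final = ssr_decision(z_interim=1.2, info_frac=0.5, n_planned=300, n_max=450)
print(f"Conditional power = {cp:.2f}, final sample size = {n_final}")
```

A real design would also need to prespecify whether and how the final critical value is adjusted to control the type I error rate.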

In the part of FDA's guidance discussing adaptations to the sample size, while the terms 'sample size re-estimation' and 'sample size adaptation' are used, a sample size increase is really what is implied.

Sample Size Adaptation and Sample Size Increase by a Fixed Number

Sample size re-estimation or sample size adaptation is really a binary decision. If the decision (after the interim analysis) is to increase the sample size, the sample size should be increased by a pre-specified, fixed number, not by a number calculated from the observed treatment effect at the interim analysis.

If the sample size is increased by an exact amount calculated from the observed treatment effect at the interim analysis to bring the conditional power up to a target level, there is a potential to back-calculate the effect size, or at least to make an educated guess about the effect size observed at the interim analysis.

This potential for an educated guess about the effect size is a huge issue from the regulatory point of view. This specific concern is discussed in FDA's guidance "Adaptive Designs for Clinical Trials of Drugs and Biologics":

"Finally, there are additional challenges in maintaining trial integrity in the presence of sample size adaptations. For example, sample size modification rules are often based on maintaining the conditional probability of a statistically significant treatment effect at the end of the trial (often called the conditional power) at or near some desired level. In this scenario, knowledge of the adaptation rule and the adaptively chosen sample size allows a relatively straightforward back-calculation of the interim estimate of treatment effect. Therefore, additional steps should be taken to limit personnel with this detailed knowledge so that trial integrity can be maintained."
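To see why this back-calculation is "relatively straightforward", consider a hypothetical rule that picks the smallest sample size bringing the conditional power under the observed trend up to 90%. An observer who knows the rule, the interim sample size, and the announced new sample size can solve for the interim z-statistic numerically. Everything below (the rule, the numbers, and the unadjusted critical value) is a simplified assumption for illustration.

```python
# A sketch of the back-calculation concern quoted above: if the new sample
# size is chosen to push conditional power (under the observed trend) to a
# target, an observer who knows the rule can approximately recover the
# interim z-statistic. All design numbers are hypothetical.
from math import sqrt
from scipy.stats import norm
from scipy.optimize import brentq

Z_ALPHA = 1.96   # final critical value (naive; ignores any alpha adjustment)

def cond_power(z1, n1, n_new):
    """Conditional power at final per-arm size n_new under the observed interim trend."""
    t = n1 / n_new
    return 1 - norm.cdf((Z_ALPHA - z1 / sqrt(t)) / sqrt(1 - t))

def choose_n_new(z1, n1, target=0.90, n_cap=600):
    """Smallest final per-arm sample size reaching the target conditional power."""
    for n_new in range(n1 + 1, n_cap + 1):
        if cond_power(z1, n1, n_new) >= target:
            return n_new
    return n_cap

# Sponsor side: interim z = 1.50 observed with 100 subjects per arm
n1, z1_true = 100, 1.50
n_new = choose_n_new(z1_true, n1)
print("Announced new sample size per arm:", n_new)

# Observer side: knowing only n1, the rule, and n_new, solve for the interim z
z1_guess = brentq(lambda z: cond_power(z, n1, n_new) - 0.90, 0.1, 5.0)
print("Back-calculated interim z-statistic:", round(z1_guess, 2))
```

Because the recovered interim z-statistic is close to the true one, limiting the personnel who know both the adaptation rule and the adaptively chosen sample size, as the guidance recommends, is the practical safeguard.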