Monday, April 28, 2025

Vanda Pharmaceuticals Sues FDA Over Tradipitant Hearing Delay – A Misplaced Battle?

This past week, Vanda Pharmaceuticals filed a federal lawsuit accusing the FDA of unlawfully delaying a hearing on the company’s new drug application (NDA) for tradipitant in gastroparesis. According to Fierce Biotech, the dispute stems from a Complete Response Letter (CRL) the FDA issued in September 2024 rejecting tradipitant as a gastroparesis treatment. Vanda contends the agency “generally disregarded” its evidence, and is now blaming FDA bureaucracy and mass layoffs for stalling the hearing. However, an examination of the facts suggests Vanda’s legal aggression is largely misdirected. The FDA’s cautious stance appears justified by the clinical data: the pivotal Phase 3 trial failed to meet its primary endpoint, and the company is left relying on post-hoc analyses to claim efficacy, which is not a substitute for a positive trial result.

Tradipitant and Gastroparesis: The Clinical Background

Gastroparesis is a chronic stomach motility disorder often marked by severe nausea and vomiting. Vanda’s investigational drug, tradipitant (an NK1-receptor antagonist), was tested in diabetic and idiopathic gastroparesis patients. The key Phase 3 trial (ClinicalTrials.gov NCT04028492) randomized 201 adults (about half diabetic, half idiopathic) to tradipitant 85mg vs. placebo twice daily for 12 weeks. The primary endpoint was the change from baseline in daily nausea severity at 12 weeks. Secondary endpoints included other gastroparesis symptoms and patient-reported outcome measures.

The trial did not meet its primary endpoint. In the full intention-to-treat (ITT) analysis, the difference in nausea reduction between tradipitant and placebo was not statistically significant (P = .741). In fact, the overall results showed no significant improvement on nausea severity or other major symptoms. According to FDA briefing materials and Vanda’s own publications, both the primary and secondary endpoints failed to reach significance. Vanda’s press release and a Healio report acknowledge that “the drug failed to meet statistically significant change in nausea severity at 12 weeks vs. placebo”. These negative findings were the basis for the FDA’s CRL.

  • Primary endpoint missed: Tradipitant did not significantly outperform placebo on nausea reduction at 12 weeks (P = .741).
  • Secondary endpoints missed: Other symptom scores likewise showed no statistical benefit over placebo.
  • Post-hoc analyses only: Vanda performed subgroup and sensitivity analyses (e.g., patients with high blood levels of drug, or excluding certain “confounders”) that showed some improvement. However, these were not pre-specified endpoints, and the official trial data remained negative.

Because the registered primary outcome was negative, the trial is conventionally considered “failed.” Vanda’s scientists point out that in post hoc subsets (for example, patients with adequate drug exposure) tradipitant appeared to reduce nausea significantly. But regulatory agencies are very clear: post-hoc or exploratory findings cannot replace a prospectively powered success. The FDA expects sponsors to present all trial data, not just cherry-picked favorable subsets. As the FDA guidance states, submissions must include “all data or information relevant to an evaluation of…effectiveness” and avoid “selecting only those sources that favor a conclusion of effectiveness.” Any conflicting evidence must be explained by a compelling scientific rationale. Vanda’s reliance on a post-hoc signal is precisely the sort of selective evidence that regulators warn against.
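To see why regulators are wary of post-hoc subgroup claims, here is a minimal simulation sketch in Python (made-up numbers, unrelated to the tradipitant data): a trial with no true treatment effect can be "rescued" far more often than the nominal 5% rate if one is allowed to hunt through a handful of unplanned subgroups for a nominally significant result.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sims, n_per_arm, n_subgroups = 2000, 100, 10
primary_hits, any_subgroup_hits = 0, 0

for _ in range(n_sims):
    drug = rng.normal(0.0, 1.0, n_per_arm)      # no true treatment effect
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    primary_hits += ttest_ind(drug, placebo).pvalue < 0.05

    # ten hypothetical post-hoc binary subgroup labels, unrelated to outcome or treatment
    labels = rng.integers(0, 2, (n_subgroups, n_per_arm))
    for g in labels:
        if ttest_ind(drug[g == 1], placebo[g == 1]).pvalue < 0.05:
            any_subgroup_hits += 1
            break

print(f"false-positive rate, prespecified primary analysis: {primary_hits / n_sims:.2f}")
print(f"chance of at least one 'significant' post-hoc subgroup: {any_subgroup_hits / n_sims:.2f}")
```

In this setup, a truly ineffective drug shows at least one nominally significant post-hoc subgroup many times more often than the 5% false-positive rate of the prespecified primary analysis, which is exactly why prospectively specified endpoints carry the evidentiary weight.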

Regulatory Standards: When One Trial Fails

FDA’s drug approval standard under the U.S. Food, Drug, and Cosmetic Act requires “substantial evidence” of effectiveness, typically meaning at least two adequate and well-controlled trials each convincing on its own. In practice, the FDA may accept a single trial only if it is exceptionally persuasive and is backed by confirmatory evidence. But in this case, tradipitant’s lone Phase 3 trial produced a non-significant primary result. Under these rules, that trial generally does not qualify as adequate evidence on its own.

Key FDA guidelines reinforce this approach:

  • FDA has long held that substantial evidence usually means two positive trials, or one trial plus “convincing” confirmatory data (see the back-of-the-envelope arithmetic after this list).
  • A negative primary outcome is usually regarded as inconclusive: experts advise that a failed primary endpoint generally warrants additional trials (with better design or power) rather than reinterpreting existing data. In fact, Pocock and Stone’s NEJM review “The Primary Outcome Fails – What Next?” (2016) emphasizes that missing a primary endpoint should prompt new studies, not substitution by post-hoc findings.
  • FDA guidance explicitly warns sponsors to present all data. Trials may be questioned if negative overall results are explained away by excluding unfavorable subsets without strong justification.
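A back-of-the-envelope calculation (a sketch, not drawn from any FDA document) shows why the two-trial standard matters: at the conventional two-sided 0.05 significance level, an ineffective drug has roughly a 2.5% chance of a chance positive result in the favorable direction in a single trial, but only about a 1-in-1,600 chance of doing so in two independent trials.

```python
alpha_one_sided = 0.025  # two-sided 0.05, counted only in the favorable direction

print(f"one positive trial by chance alone: {alpha_one_sided:.3%}")
print(f"two independent positive trials:    {alpha_one_sided ** 2:.4%}"
      f"  (about 1 in {1 / alpha_one_sided ** 2:,.0f})")
```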

Furthermore, the FDA’s own Good Review Practices (GRPs) underscore that reviewers follow documented best practices – focusing on consistency, transparency, and rigor. These internal policies help ensure each application is handled fairly and methodically, not arbitrarily slowed. The agency pointed out that Vanda’s request involves 15,000 pages, including new analyses not in the original NDA, which justifies thorough review under GRPs.

In sum, the regulatory framework suggests that FDA’s request for additional data and its cautious timetable are not arbitrary delays but adherence to standard procedures. A single failed pivotal trial simply does not meet the substantial-evidence bar. According to these criteria, the CRL was scientifically grounded.

When a Primary Outcome Fails

Statisticians and regulators agree: if a trial misses its primary outcome, the conservative path is to consider the study negative (or at best inconclusive) and plan further research. Pocock and Stone’s NEJM review makes this clear. They argue that a non-significant primary result generally means the experiment didn’t prove efficacy; turning to retrospective subgroups or alternative endpoints without a new trial risks false-positive “findings.” In practical terms, a missed endpoint should lead to redesigning studies or confirming any hints of effect in fresh data. Simply reanalyzing the same data to “find” significance is discouraged.

FDA guidance mirrors this stance. Even when one trial shows some favorable signal, the agency demands that confirmatory evidence come from independent sources (for example, a second trial or external data). Confident conclusions require evidence that is not retrospectively cherry-picked. In Vanda’s case, the company essentially trimmed its data after the fact (excluding patients and focusing on those with higher drug exposure) to claim a positive result. By FDA and statistical standards, that strategy is insufficient. It falls short of the clear, prospectively defined success needed to approve a new treatment.

The Lawsuit and Vanda’s Claims

Vanda’s lawsuit is technically about FDA’s timing: the company alleges the FDA is unlawfully delaying a regulatory hearing on its NDA and related disputes. According to Fierce Biotech, FDA told Vanda it would not schedule the hearing until September 2025 because of the case’s complexity, the new material submitted, and even unrelated litigation and staff layoffs. Vanda responded by suing, accusing the FDA of stonewalling it and even blaming the agency’s Commissioner and hiring cuts.

In parallel, Vanda is fighting on multiple fronts: it has also challenged FDA rules on its insomnia drug, on clinical holds for other tradipitant studies, and more. The Fierce article notes that Vanda has pressed FDA to hold a hearing within 120 days of its January 2025 request, as the company believes is its right.

However, all these battles circle back to the core issue: tradipitant’s failed data. The CRL that triggered the hearing request was grounded in the negative phase 3 results. In public statements, Vanda’s CEO complained that FDA “generally disregarded” the evidence — but that “evidence” was largely the same trial data and post-hoc arguments that regulators found unconvincing. The FDA’s position is that the application needed additional study, in line with scientific norms.

A Misplaced Blame

It’s understandable that Vanda is frustrated: developing new drugs is costly, and delays eat into patents and investor patience. But blaming the FDA’s procedures overlooks the science. The data from the major gastroparesis trial simply didn’t demonstrate clear efficacy. In drug development, a failed trial is a standard setback, not usually a basis for litigation. Lawsuits over process cannot overcome the fact that the trial was negative by its own primary analysis.

Regulatory experts would likely say that calling for more trials is the correct outcome. As FDA guidance and statistical authorities emphasize, when a primary endpoint is missed, one doesn’t put lipstick on the dataset to make it pass. Instead, one either finds a new, adequately powered study design or gathers stronger confirmatory evidence. Vanda’s focus on legal maneuvers and after-the-fact data trimming comes across as deflecting from this core reality.

In the meantime, the FDA is merely following its structured review timelines and good practice guidelines. Delays stem from legitimate reasons (volume of material, outside litigation, workforce changes), not from any conspiracy to stall Vanda unfairly.

Ultimately, Vanda’s case serves as a reminder that science should drive decision-making, even amid disputes. The trial evidence—supported by FDA and statistical guidance—points to one conclusion: tradipitant’s efficacy was not established by the trial as conducted. Vanda may see a silver lining in exploratory data, but regulators are right to hold the company to proven standards. Suing over process, in this instance, appears to be a costly distraction from the real task: generating reliable clinical proof for tradipitant.

Saturday, April 26, 2025

Comparison of “Serious Breach” (EMA) vs “Important Protocol Deviation” (FDA)

The International Council for Harmonisation (ICH) E3 Guideline: Structure and Content of Clinical Study Reports Questions & Answers (R1) (2012) provided the definitions of protocol deviation and important protocol deviation. The term "protocol violation" should be avoided and replaced with "protocol deviation":

A protocol deviation is any change, divergence, or departure from the study design or procedures defined in the protocol. 

Important protocol deviations are a subset of protocol deviations that may significantly impact the completeness, accuracy, and/or reliability of the study data or that may significantly affect a subject's rights, safety, or well-being. For example, important protocol deviations may include enrolling subjects in violation of key eligibility criteria designed to ensure a specific subject population or failing to collect data necessary to interpret primary endpoints, as this may compromise the scientific value of the trial.

Protocol violation and important protocol deviation are sometimes used interchangeably to refer to a significant departure from protocol requirements. The word “violation” may also have other meanings in a regulatory context. However, in Annex IVa, Subject Disposition of the ICH E3 Guideline, the term protocol violation was intended to mean only a change, divergence, or departure from the study requirements, whether by the subject or investigator, that resulted in a subject’s withdrawal from study participation. (Whether such subjects should be included in the study analysis is a separate question.)

To avoid confusion over terminology, sponsors are encouraged to replace the phrase “protocol violation” in Annex IVa with “protocol deviation”, as shown in the example flowchart below. Sponsors may also choose to use another descriptor, provided that the information presented is generally consistent with the definition of protocol violation provided above.

In adopting and implementing the ICH E3 guidelines, the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) differ in certain respects. In particular, the EMA guidelines introduce a new term: “serious breach.” While the concept of a “serious breach” overlaps with that of an “important protocol deviation,” subtle distinctions remain between the two terms.

The EMA’s clinical trial protocol breach guideline (CTR Article 52) uses the term “serious breach” to mean a protocol or regulation violation with significant impact on a trial. In contrast, FDA’s protocol deviations guidance (December 2024) adopts ICH definitions and uses “important protocol deviation” for critical departures from the protocol. We compare these terms by definition, thresholds, reporting, examples, and implications.

Definition

  • Serious breach (EMA/CTR): Any deviation from the approved protocol (or the EU CTR) likely to affect subject safety, rights, or data reliability to a significant degree. This is a legal term under EU CTR (Art. 52).
  • Important protocol deviation (FDA/ICH): A subset of protocol deviations that might significantly affect data completeness, accuracy, or reliability, or significantly affect a subject’s rights, safety, or well‑being. (FDA guidance aligns with ICH E3(R1).)

Criteria/Threshold

  • Serious breach: The threshold is “likely to affect to a significant degree” at least one of: safety, rights, or data integrity. The impact must be substantial (regulatory text: “significant degree”). In practice, breaches that systematically endanger participants or core data are serious.
  • Important protocol deviation: The threshold is “might significantly affect” key trial aspects. Important deviations are those that could undermine data quality or subject welfare – for example, affecting critical‑to‑quality factors (ICH E8(R1)) such as eligibility, endpoint collection, randomization, or safety monitoring. Deviations not reaching “significant” impact are considered minor.

Reporting Obligations

  • Serious breach: Mandatory notification to regulators. The sponsor must report via CTIS (the EU portal) without undue delay, and within 7 calendar days of becoming aware. The sponsor is responsible for the report (it may delegate CTIS submission). All serious breaches must be entered into the EU database (CTIS). Investigators and CROs must promptly inform the sponsor of any suspected serious breach. Failure to report can trigger GCP inspection findings.
  • Important protocol deviation: No direct mandatory FDA notification. FDA does not require sponsors to notify FDA of individual protocol deviations. Instead, FDA recommends that investigators promptly report important deviations to the sponsor and IRB, and that sponsors document them. Specifically, investigators should report all deviations to the sponsor (highlighting the important ones), and device trials must report emergency deviations to the sponsor/IRB within ~5 days. Sponsors should capture important deviations in oversight and include them in regulatory submissions: FDA guidance advises sponsors to discuss important deviations in the Clinical Study Report (NDA/BLA) and list all deviations in the Study Data Tabulation Model (SDTM) dataset (with an indicator of importance).

Examples

  • Serious breach: EMA guideline Appendix I (non‑exhaustive) includes situations such as dosing a subject with the wrong drug or dose (e.g., incorrect IMP, wrong frequency); giving IMP to a pregnant subject without the protocol-required test; or systematically failing to administer required therapy. Other examples: temperature excursions unverified by corrective action, severe informed consent lapses, etc. These all have significant safety or data impact.
  • Important protocol deviation: FDA guidance examples highlight failures that impair safety or data, for instance: skipping key safety assessments (e.g., missing lab tests or not administering study drug per specification); giving a prohibited concomitant treatment; failure to obtain valid informed consent or protect private data; dosing errors (wrong treatment or dose); non‑adherence to randomization; enrolling ineligible subjects or missing primary endpoint data; or unblinding inappropriately. These deviations affect subject protection or critical data (as listed in the FDA document).

Consequences/Implications

  • Serious breach: Regulatory action. Reporting a serious breach obligates Member States to evaluate it. Some breaches may require corrective actions overseen by regulators; others may trigger inspections or trial suspension. Failure to have a proper breach-reporting system can itself be a GCP inspection finding. If serious breaches reveal fundamental trial defects (e.g., safety is compromised), the trial approval or application may be withdrawn. In summary, a serious breach can lead to heightened regulatory scrutiny, mandatory CAPA plans, or more severe sanctions.
  • Important protocol deviation: Data validity risks. The FDA guidance notes that frequent or severe important deviations can compromise a trial’s validity. Reviewers may judge a study “not adequate and well‑controlled” if key deviations occur (e.g., wrong enrollment or missing data). As a result, sponsors are urged to design protocols to minimize these issues (via “quality‑by‑design”). Important deviations do not immediately trigger FDA enforcement, but they must be documented; pervasive deviations could lead FDA to question the reliability of the submitted data. Investigators and IRBs also review important deviations for participant safety.

Key differences: A serious breach is a legal CT regulation concept requiring prompt notification to EU authorities; by contrast, an important protocol deviation is a statistical/GCP concept from FDA/ICH guidance, focused on documenting major protocol departures in the trial record. The EMA rule imposes timely reporting to regulators (CTIS) for serious breaches, whereas the FDA guidance places emphasis on internal reporting and documentation (investigator→sponsor/IRB, sponsor→study report) rather than immediate FDA notification.

Friday, April 25, 2025

Classifying Protocol Deviations - New FDA Guidance on Protocol Deviations

In clinical trials, protocol deviations are generally unintentional departures from the IRB-approved protocol and are commonly not discovered until after they occur (e.g., an investigator’s failure to perform a protocol-required test is discovered by the study monitor during a routine monitoring visit).

Too many protocol deviations could indicate problems with the protocol itself (a poorly written protocol), with protocol compliance by the investigators, and with the overall quality of the study. Protocol deviations are collected and reviewed throughout the study. At the end of the study, the final list of protocol deviations is converted into a data set (the CDISC SDTM DV data set). The protocol deviations are then listed and summarized for inclusion in the clinical study report. According to ICH E3 "Structure and Content of Clinical Study Reports", protocol deviations are described in CSR Section 10.2.
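As a rough illustration (a sketch only: the variable names follow the usual SDTM DV events-domain conventions but this is not an official template, and the importance flag is shown as a simple column rather than the supplemental qualifier or category variable often used in practice), the collected deviations can be assembled into a DV-style data set and then listed by subject and by site:

```python
import pandas as pd

# Hypothetical deviations collected during a study (illustrative records only)
deviations = [
    ("001-0001", "SITE-001", "Visit 3 performed outside protocol window", "N"),
    ("001-0007", "SITE-001", "Subject enrolled despite exclusionary lab value", "Y"),
    ("002-0015", "SITE-002", "Primary endpoint assessment not performed", "Y"),
]

dv = pd.DataFrame(deviations, columns=["USUBJID", "SITEID", "DVTERM", "IMPORTANT"])
dv.insert(0, "DOMAIN", "DV")
dv.insert(0, "STUDYID", "STUDY-XYZ")
dv["DVSEQ"] = dv.groupby("USUBJID").cumcount() + 1  # sequence number within subject

# BIMO-style view: deviations listed by clinical site and subject,
# flagged as important vs. not important by the sponsor
print(dv.sort_values(["SITEID", "USUBJID"]).to_string(index=False))
```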


For NDA/BLA submissions to CDER, FDA, there is a special requirement to submit the BIMO (bioresearch monitoring) data sets and listings to assist FDA in selecting investigational sites for inspection. Protocol deviations are an important part of the BIMO data. According to the "BIORESEARCH MONITORING TECHNICAL CONFORMANCE GUIDE", protocol deviations need to be provided in listings by subject and by clinical site.

7. Protocol Deviations 

This by-subject, by-clinical site listing should include all protocol deviations. The listing should include a description of the deviation and identify whether the sponsor considered the deviation to be an important or non-important protocol deviation. 

Protocol deviations need to be classified based on their seriousness and their impact on the safety and efficacy evaluations. Traditionally, we have classified protocol deviations into minor and major categories. However, other terms have also been used for categorizing protocol deviations: critical protocol deviations, significant protocol deviations, and so on.

FDA's new guidance "Protocol Deviations for Clinical Investigations of Drugs, Biological Products, and Devices" (December 2024) sets this straight. Only two categories will be used for protocol deviations: Important protocol deviations and other protocol deviations. 

1. Important Protocol Deviations 

As noted above, in this guidance an important protocol deviation is a subset of protocol deviations that might significantly affect the completeness, accuracy, and/or reliability of the study data or that might significantly affect a subject’s rights, safety, or well-being. While other terms such as major, critical, and significant have sometimes been used to classify such protocol deviations, FDA recommends using important to encompass all these terms. 

2. All Other Protocol Deviations 

All other protocol deviations that do not meet the definition of an important protocol deviation may encompass the commonly used terms minor, noncritical, and non-significant deviations. Examples of all other protocol deviations may include small deviations from protocol-specified visit windows; a signed consent with a page missing a participant’s initial; or failure to perform a study procedure not relevant for safety monitoring or not related to an important study efficacy endpoint (e.g., primary or secondary endpoints). 

Saturday, February 15, 2025

Sponsor’s Response to DMC Recommendation: A Case Study from a Phase 2b/3 Trial

The DMC (data monitoring committee) or DSMB (data safety monitoring board) is now commonly used in clinical trials, especially late-phase clinical trials. According to the FDA guidance for industry "Use of Data Monitoring Committees in Clinical Trials", DMC responsibilities include:

1. Monitoring of Trial Conduct

2. Monitoring of Results of Interim Analysis of Trial Data

   • Safety – to determine if there is a credibly increased risk of a serious adverse outcome in subjects receiving the investigational product, indicating that enrollment should be stopped. To determine a safety risk, review of unblinded efficacy data should also be conducted by the DMC as they evaluate a benefit-risk assessment

   • Implementing a predefined adaptive feature

     i. Efficacy – to determine if there is statistically significant evidence of efficacy such that enrollment should be stopped

     ii. Futility – to determine if there is no longer a reasonable likelihood that the trial will reach a conclusion of effectiveness, so that enrollment should be stopped to protect subjects from further exposure to a potentially ineffective investigational product and to conserve resources

     iii. Other adaptations – a DMC or a separate adaptation committee should determine if a prespecified adaptive aspect of the trial design is to be implemented. This can include modifying the sample size, changing a randomization ratio, or restricting future enrollment to a prespecified subgroup (adaptive enrichment)

3. Consideration of External Data

4. Recommendations and Documentation

When a DMC is established, a DMC charter is also established to describe the DMC's obligations, responsibilities, and standard operating procedures. The DMC charter may also specify any stopping rules to implement, decision trees to follow, and adaptation rules to apply.
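Stopping rules in a DMC charter are commonly expressed through an alpha-spending function. The minimal sketch below (assuming four equally spaced looks and an O'Brien-Fleming-type Lan-DeMets spending function, purely for illustration) shows how little of the overall type I error is "spent" at early interim analyses. Converting these cumulative amounts into actual stopping boundaries requires the recursive numerical integration implemented in group-sequential software, which this sketch does not attempt.

```python
from scipy.stats import norm

alpha = 0.05                               # overall two-sided type I error
z_ref = norm.ppf(1 - alpha / 2)
info_fractions = [0.25, 0.50, 0.75, 1.00]  # hypothetical interim information fractions

for t in info_fractions:
    # O'Brien-Fleming-type spending function (Lan-DeMets):
    # alpha*(t) = 2 - 2 * Phi(z_{1-alpha/2} / sqrt(t))
    spent = 2 - 2 * norm.cdf(z_ref / t ** 0.5)
    print(f"information fraction {t:.2f}: cumulative alpha spent = {spent:.5f}")
```

The output shows that almost none of the 0.05 is used at the first look, which is why O'Brien-Fleming-type rules make it very hard to stop early without overwhelming evidence.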

The DMC communicates with the sponsor through its recommendations. The FDA guidance says the following about DMC recommendations:

A fundamental responsibility of a DMC is to make recommendations to the sponsor concerning the continuation of the trial.  Most frequently, a DMC’s recommendation after an interim review is for the trial to continue as designed.  Other less frequent but possible recommendations, however, as discussed previously, include trial termination, trial continuation with major or minor modifications (such as implementation of prespecified adaptive elements), or temporary suspension of enrollment and/or trial intervention until an identified uncertainty is resolved. 

A DMC should express its recommendations clearly to the sponsor because a DMC’s actions potentially affect the safety of trial subjects.  Both a written recommendation and an oral communication, with opportunity for questions and discussion, can be valuable.  Recommendations for modifications are best accompanied by the minimum amount of data critical for the sponsor to make a reasonable decision about the recommendation, and the rationale for such recommendations should be as clear and precise as possible.  Sponsors may wish to develop internal procedures to limit the interim data released by a DMC after a recommendation and until a decision is made regarding acceptance or rejection of the recommendation in order to help maintain confidentiality of the interim results should the trial continue.  We recommend that a DMC document its recommendations and rationale in a manner that can be reviewed by the sponsor and then circulated, as appropriate, to IRBs, FDA, and/or other interested parties, when based on interim data.  Major trial changes—such as early trial termination, change in population or entry criteria, or change in trial endpoints—can have substantial impact on the validity of the trial and/or its ability to support the desired regulatory decision.  Sponsors should discuss with FDA any proposed protocol changes based on review of interim data that were not planned for, before implementation, and submit such changes to FDA in accordance with 21 CFR 312.30 and 812.35.  However, if the sponsor learns of information that presents an imminent safety hazard to trial participants, sponsors should implement the necessary changes as quickly as possible to ensure the safety and welfare of study subjects (see 21 CFR 312.30(b)(2)(ii) and 812.35(a)(2)). 

In most situations, the study progresses as expected and it is easy for the DMC to recommend continuing without changes. In complicated situations, however, the DMC needs to make a tough decision, such as recommending termination of the study. In a paper by Wittes et al., "The Data Monitoring Committee: A Collective or a Collection?", the following suggestions on operating by consensus were made:

In a typical DMC meeting, data emerge as expected. No worrisome safety concern arises; the efficacy data are not surprising; and the DMC deems that trial is progressing as planned with, perhaps, some lag in recruitment and a less than desirable rate of follow-up of participants and incomplete capture of important efficacy and safety data. These and other quality metrics affect decision-making and the DMC may discuss them with study leadership. When, however, evidence of an unexpected harm arises, or the study operations appear unacceptable, or efficacy appears much different from anticipated, the deliberations of the DMC may reveal initial, perhaps strong, differences of opinion. A requirement to vote may curtail discussion and may lead to the failure to produce a recommendation that all find acceptable. Instead, we agree with those who urge DMCs to operate by consensus . Operating by consensus means that a DMC can have an odd or even number of members. Prior to reaching consensus, the Chair may elicit the opinion of each member to gauge the general views of members of the DMC or even call an informal straw vote. Regardless of how the DMC reached consensus, all members should agree to the language summarizing its recommendations....

This week, we saw an interesting example of how a DMC recommendation was received and handled by the sponsor. Apparently, the sponsor did not fully trust the DMC's recommendation to pause the study. The sponsor is now assembling an expert panel to review the unblinded data to determine how the DMC's recommendation was made and what the rationale behind it was.

Pliant Therapeutics announced that its Phase 2b/3 study in IPF was suspended per the DSMB's recommendation.

          Pliant Brings in Outside Experts to Review IPF Study Pause

          Pliant Therapeutics has initiated the assembly of an outside panel of world-renowned experts to review BEACON-IPF trial data; announces next steps following the DSMB recommendation

SOUTH SAN FRANCISCO, Calif., Feb. 13, 2025 (GLOBE NEWSWIRE) -- Pliant Therapeutics, Inc. (Nasdaq: PLRX) today announced that, per the charter of the trial’s independent Data Safety Monitoring Board (DSMB), the Company has initiated the assembly of an outside expert panel to review unblinded data from the ongoing BEACON-IPF Phase 2b trial of bexotegrast in patients with idiopathic pulmonary fibrosis (IPF). The panel, consisting of world-renowned experts in pulmonary diseases and biostatistics, will provide an independent recommendation to Pliant regarding the BEACON-IPF trial. Subsequently, the panel will serve as part of an expanded DSMB with the goal to reach a consensus recommendation regarding BEACON-IPF. The decision to assemble the outside panel was taken as the Company has not been able, through review of blinded data, to determine the rationale for the DSMB’s recommendation to pause enrollment and dosing in the trial. The Company expects this process to conclude in two to four weeks.

Following the DSMB’s previously announced recommendation, Pliant voluntarily paused enrollment and dosing in the BEACON-IPF clinical trial. Pliant is committed to remaining blinded ensuring the data integrity of the BEACON-IPF 2b clinical trial with the goal of maintaining its potential to serve as a registrational trial.

It is very likely that the DMC focused on the safety review and followed the FDA guidance, which says:

Safety – to determine if there is a credibly increased risk of a serious adverse outcome in subjects receiving the investigational product, indicating that enrollment should be stopped. To determine a safety risk, review of unblinded efficacy data should also be conducted by the DMC as they evaluate a benefit-risk assessment

There may be an imbalance in the number of deaths or serious adverse events, and there may be no clear indication of efficacy. In such cases, the DMC's recommendation to stop the trial is based on a careful benefit-risk assessment.
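For illustration only (the counts below are hypothetical and have nothing to do with the BEACON-IPF data), a DMC statistician might quantify such an imbalance in serious adverse events with a simple 2x2 comparison, and the committee would then weigh it against whatever efficacy signal exists rather than applying a fixed significance threshold:

```python
from scipy.stats import fisher_exact

# Hypothetical unblinded counts a DMC might review: SAEs among 120 subjects per arm
table = [[18, 102],   # investigational arm: 18 with an SAE, 102 without
         [7, 113]]    # placebo arm:          7 with an SAE, 113 without

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```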

Friday, February 14, 2025

Understanding Mis-Stratification in Randomized Controlled Clinical Trials

Stratified randomization is a common practice in randomized, controlled clinical trials. It ensures that key characteristics are evenly distributed across treatment groups and the treatment assignments are balanced within each randomization stratum, enhancing the validity of study results. However, during the course of a trial, mis-stratification can occur—this happens when an incorrect stratification stratum is used during randomization. Let's explore what this means, why it happens, and how it impacts clinical trials.


What is Mis-Stratification?

In clinical trials, stratification factors (e.g., age, disease severity, disease subgroup, or background medication use) are used to group participants before randomization. Stratified randomization is used to ensure that equal numbers of subjects with one or more characteristics thought to affect the treatment outcome or efficacy measures are allocated to each comparison group. Mis-stratification, a type of randomization error, occurs when:

  • An incorrect stratification factor is used for randomization, or
  • A participant is placed in the wrong stratum due to an error.

Despite this, the treatment assignment and drug dispensation remain accurate, making mis-stratification a minor deviation rather than a critical error. When mis-stratification occurs, the random code and the treatment assignment are simply pulled from an incorrect stratum.
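A minimal sketch (assuming a simple permuted-block scheme and a hypothetical "disease severity" stratification factor) shows why the assignment itself remains valid: a mis-stratified subject simply draws the next assignment from the wrong stratum's randomization list, which is still a properly generated random assignment.

```python
import random

random.seed(2025)

def permuted_block_list(n_blocks, block_size=4):
    """Treatment assignment list for one stratum, built from permuted blocks."""
    assignments = []
    for _ in range(n_blocks):
        block = ["active"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        random.shuffle(block)
        assignments.extend(block)
    return assignments

# One pre-generated randomization list per stratum (hypothetical severity strata)
strata_lists = {"mild": permuted_block_list(10), "severe": permuted_block_list(10)}
next_slot = {"mild": 0, "severe": 0}

def randomize(subject_id, reported_stratum):
    """Assign the next treatment from the stratum the site reported, right or wrong."""
    tx = strata_lists[reported_stratum][next_slot[reported_stratum]]
    next_slot[reported_stratum] += 1
    print(f"{subject_id}: stratum '{reported_stratum}' -> {tx}")
    return tx

# A truly 'severe' subject mis-stratified as 'mild' still receives a valid random
# assignment; it is simply drawn from the 'mild' list (the wrong stratum).
randomize("SUBJ-001", "mild")
```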


Historical Approach: Intention-to-Treat Principle

Traditionally, clinical trials have adhered to the Intention-to-Treat (ITT) principle, where participants are analyzed according to the group they were originally randomized to, regardless of any errors. This approach maintains the integrity of the randomization process.

In practice, this means using the original stratification data—even if incorrect—in the statistical analysis. A typical Statistical Analysis Plan (SAP) might state:

“All original stratification information used in the randomization procedure will be used for analyses, regardless of whether it was later found to be incorrect. All efficacy analyses will be performed primarily on the ITT Population.”

This approach minimizes bias and reflects the 'real-world' impact of treatment. In the mis-stratification situation, however, using the incorrect stratum information in the analyses may be unnecessarily strict.


Why Does Mis-Stratification Occur?

Mis-stratification can result from several factors, including:

  • Too Many Stratification Factors: More factors increase the complexity and likelihood of error.
  • Local vs. Central Lab Results: Differences between local and central lab measurements can lead to misclassification.
  • Timing of Measurement: Stratification factors measured at different times (baseline vs. screening) may not align.
  • Medication Use: Stratifying by prior or concomitant medication use can be complicated by variations in patient reporting or prescription practices.

These issues highlight potential flaws in protocol design and study quality, emphasizing the need for clear definitions and consistent procedures.


Regulatory Perspective: FDA Guidance

The FDA's guidance document, “Adjusting for Covariates in Randomized Clinical Trials for Drugs and Biological Products,” provides clarity on handling mis-stratification:

“Randomization is often stratified by baseline covariates. A covariate adjustment model should generally include strata variables and can also include covariates not used for stratifying randomization. In some cases, incorrect stratification may occur and result in actual and as-randomized baseline strata variables. A covariate adjustment model can use either strata variable definition as long as this is prespecified.”

This guidance supports the use of either the originally assigned stratification or the actual baseline data in the analysis, provided it is specified before data unblinding. This flexibility helps maintain the study's validity while addressing stratification errors transparently.
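A minimal simulation sketch (hypothetical data, simplified to unstratified 1:1 assignment rather than true stratified randomization) of what this flexibility looks like in the analysis: the treatment effect can be estimated adjusting for either the as-randomized stratum or the actual baseline stratum, and with a low mis-stratification rate the two prespecified choices give very similar answers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400

# Simulate a trial with one binary stratification factor and a continuous outcome
true_stratum = rng.integers(0, 2, n)      # actual baseline stratum
treatment = rng.integers(0, 2, n)         # 1:1 assignment (simplified, not stratified)
misclassified = rng.random(n) < 0.05      # 5% of subjects mis-stratified at randomization
randomized_stratum = np.where(misclassified, 1 - true_stratum, true_stratum)

outcome = 2.0 * treatment + 3.0 * true_stratum + rng.normal(0, 4, n)
df = pd.DataFrame({"y": outcome, "trt": treatment,
                   "strat_rand": randomized_stratum, "strat_actual": true_stratum})

# Either prespecified choice of strata variable can be used in the adjustment model
m_rand = smf.ols("y ~ trt + C(strat_rand)", data=df).fit()
m_actual = smf.ols("y ~ trt + C(strat_actual)", data=df).fit()
print("treatment effect, as-randomized stratum:", round(m_rand.params["trt"], 2))
print("treatment effect, actual stratum:       ", round(m_actual.params["trt"], 2))
```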


Impact on Study Results

Mis-stratification is generally considered a minor deviation because its impact on efficacy and safety analyses is minimal. It does not affect treatment assignment or drug dispensation but only the stratum from which the assignment was drawn.

When incorrect stratification occurs, the actual stratification information is collected in the Electronic Data Capture (EDC) system and can be used in sensitivity analyses to evaluate the robustness of the study results.

There is an article, "Handling misclassified stratification variables in the analysis of randomised trials with continuous outcomes", in which the authors conducted a simulation study to investigate the impact of mis-stratification on the statistical analyses.


Minimizing Mis-Stratification in Randomization

Too many mis-stratification errors indicate poor quality in the conduct of a clinical trial. To reduce the risk of mis-stratification, consider the following best practices:

  • Limit Stratification Factors: Use the minimum necessary factors to reduce complexity.
  • Consistent Measurement Timing: Align the timing of stratification factor measurements (e.g., always at baseline).
  • Clear Definitions: Ensure stratification criteria are clearly defined, identified or measured, and uniformly applied.
  • Training and Quality Checks: Provide thorough training for study personnel and implement rigorous quality checks.

Conclusion

While mis-stratification is not ideal, its impact on clinical trial results is usually minimal. By adhering to the Intention-to-Treat principle and following regulatory guidance, researchers can maintain the integrity of their analyses. As clinical trial designs become more complex, understanding and managing mis-stratification will continue to be crucial for maintaining study quality and reliability.

Monday, January 13, 2025

Prognostic enrichment versus predictive enrichment

Prognostic enrichment and predictive enrichment are both strategies used in clinical trials to select patients for inclusion. Both strategies aim to improve trial efficiency but address different aspects of clinical trial design.

Prognostic enrichment
Selects patients who are more likely to experience a disease-related event or condition. This strategy can help reduce the sample size required for event-driven trials.

Predictive enrichment
Selects patients who are more likely to benefit from a treatment or intervention based on a physiological or biological mechanism.




FDA guidance includes some detailed examples of using prognostic enrichment strategies or predictive enrichment strategies.

The differences between prognostic enrichment and predictive enrichment can be summarized as follows:

Definition

  • Prognostic enrichment: Selecting patients based on their likelihood of experiencing a specific outcome (e.g., disease progression or event) regardless of treatment.
  • Predictive enrichment: Selecting patients based on their likelihood of responding to a specific treatment due to a biomarker or characteristic.

Goal

  • Prognostic enrichment: To increase the event rate or outcome frequency in the trial population, improving statistical power.
  • Predictive enrichment: To identify patients who are more likely to benefit from the investigational treatment.

Focus

  • Prognostic enrichment: Focuses on the natural history of the disease or the risk of an outcome.
  • Predictive enrichment: Focuses on the interaction between the treatment and a specific patient characteristic (e.g., a biomarker).

Patient Selection

  • Prognostic enrichment: Patients are selected based on prognostic factors (e.g., disease severity, biomarkers, or risk scores).
  • Predictive enrichment: Patients are selected based on predictive factors (e.g., presence of a biomarker or genetic mutation).

Outcome

  • Prognostic enrichment: Increases the proportion of patients who experience the outcome of interest.
  • Predictive enrichment: Increases the likelihood of observing a treatment effect in the selected population.

Examples

  • Prognostic enrichment: Enrolling patients with advanced-stage cancer to ensure a higher rate of disease progression; enrolling severe patients who may be more likely to develop clinical worsening events.
  • Predictive enrichment: Enrolling only patients with a specific genetic mutation (genetic biomarker) that is targeted by the therapy; enrolling only patients in a specific etiology subgroup who are expected to be more responsive to the investigational treatment.

Impact on Trial

  • Prognostic enrichment: Reduces sample size and trial duration by enriching for patients with a higher event rate.
  • Predictive enrichment: Improves treatment effect size by focusing on patients who are more likely to respond.

Statistical Benefit

  • Prognostic enrichment: Increases statistical power by reducing variability in the control group.
  • Predictive enrichment: Increases effect size by reducing heterogeneity in treatment response.

Risk

  • Prognostic enrichment: May exclude patients who could still benefit from the treatment.
  • Predictive enrichment: May limit generalizability of trial results to a broader population.

Common Use Cases

  • Prognostic enrichment: Trials where the primary endpoint is time-to-event (e.g., survival, disease progression).
  • Predictive enrichment: Trials where the treatment mechanism is targeted (e.g., precision medicine, biomarker-driven therapies).

Sunday, January 12, 2025

Survivorship Bias and its Occurrences in Clinical Trials

Survivorship bias, or survival bias, is the logical error of concentrating on entities that passed a selection process while overlooking those that did not. It is a form of selection bias: one considers only the surviving observations, without the data points that didn't "survive" the selection event, and the resulting incomplete data can lead to incorrect or misleading conclusions.
 
The most famous example of survivorship bias comes from World War II. During the war, the high rate of planes being shot down posed a significant challenge. To address this, a team was tasked with improving the protection and armor of the aircraft to reduce their vulnerability. The team analyzed the planes that returned from missions, documenting the locations of damage and creating a damage map. They observed heavy damage on the wings and tail sections, leading them to reinforce these areas. However, despite these modifications, the rate of plane losses did not significantly decrease.

The breakthrough came when the team realized a critical oversight: they were only examining the planes that had successfully returned, meaning the damage they observed was survivable and not the kind that would cause a plane to be lost. This insight shifted their perspective, highlighting that the most vulnerable areas were likely those that hadn’t been hit on the returning planes—areas where damage would be fatal. By focusing on reinforcing these previously overlooked sections, the team was able to better protect the aircraft and pilots, ultimately saving lives. This story underscores the importance of considering not only the visible data but also the hidden or missing information, as what you don’t know can be just as critical as what you do know.

Survivorship Bias in Clinical Trials

Survivorship bias occurs when we focus only on the "survivors", "completers", "responders", or successful outcomes in a dataset while ignoring those that didn’t make it. This can lead to overly optimistic or inaccurate conclusions because the full picture isn’t being considered. In the context of clinical trials, survivorship bias can distort our understanding of a treatment’s effectiveness or safety by excluding data from participants who dropped out, didn’t respond to the treatment, or experienced adverse events.

Survivorship bias is very common in clinical trials, even though the term "survivorship bias" may not be explicitly used and the issue may instead be described under "selection bias".

How Does Survivorship Bias Manifest in Clinical Trials?
  1. Dropout Rates and Missing Data
    Clinical trials often experience participant dropouts due to side effects, lack of efficacy, or personal reasons. If researchers only analyze data from participants who completed the trial, they may overestimate the treatment’s effectiveness or underestimate its risks. For example, a drug might appear highly effective because only the patients who benefited from it stayed in the trial, while those who didn’t respond or experienced severe side effects left.
  2. Selective Reporting
    Researchers or sponsors may inadvertently (or intentionally) focus on positive outcomes while downplaying or omitting negative results. This can create a skewed perception of a treatment’s success. For instance, if a trial reports only the patients who improved and ignores those who didn’t, the treatment may seem more promising than it actually is.
  3. Long-Term Follow-Up Gaps
    Many clinical trials focus on short-term outcomes, which can miss long-term effects. Patients who experience adverse events or relapse after the trial ends may not be included in the final analysis, leading to an incomplete understanding of the treatment’s safety and efficacy.
  4. Population Selection
    Clinical trials often exclude certain populations, such as older adults, pregnant women, or individuals with comorbidities. While this is sometimes necessary for safety or feasibility, it can create a biased sample that doesn’t reflect the real-world population. The "survivors" in this case are the participants who met the strict inclusion criteria, potentially limiting the generalizability of the results.

Real-World Consequences of Survivorship Bias

Survivorship bias in clinical trials can have serious implications for patients, healthcare providers, and policymakers. For example:

  • Overestimation of Treatment Efficacy: If a drug appears more effective than it truly is, patients may be prescribed a treatment that doesn’t work for them, wasting time and resources.
  • Underestimation of Risks: Ignoring data from participants who dropped out due to side effects can lead to an incomplete understanding of a treatment’s safety profile.
  • Misguided Policy Decisions: Policymakers relying on biased trial results may approve treatments that aren’t as effective or safe as they seem, potentially putting public health at risk.

How Can We Address Survivorship Bias?

  1. Intent-to-Treat Analysis or Treatment Policy Strategy in the Estimand Framework
    One of the most effective ways to mitigate survivorship bias is to use an intent-to-treat (ITT) analysis, or a treatment policy strategy for handling intercurrent events, which includes all participants who were randomized in the trial, regardless of whether they completed it. This approach provides a more realistic picture of the treatment's effectiveness in real-world conditions (see the simulation sketch after this list).
  2. Avoid performing analyses only on completers
  3. Encourage patients with intercurrent events to remain in the study to minimize dropouts
  4. Transparency in Reporting
    Researchers should report all outcomes, including dropouts, adverse events, and negative results. Journals and regulatory agencies can encourage this by requiring comprehensive data disclosure.
  5. Long-Term Follow-Up
    Extending the follow-up period can help capture long-term outcomes and provide a more complete understanding of a treatment’s benefits and risks.
  6. Diverse Participant Populations
    Including a broader range of participants, such as older adults and individuals with comorbidities, can improve the generalizability of trial results and reduce bias.
  7. Independent Oversight
    Independent review boards and data monitoring committees can help ensure that trials are conducted and analyzed objectively, minimizing the risk of bias.
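The simulation sketch below (hypothetical numbers; the drug has no true effect) illustrates point 1: when non-responders drop out more often on the investigational arm, a completers-only analysis manufactures an apparent benefit, while an ITT-style analysis that counts every randomized subject does not.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500  # subjects per arm

# Hypothetical trial in which the drug has NO true effect on the response probability
p_response = 0.30
resp_drug = rng.random(n) < p_response
resp_placebo = rng.random(n) < p_response

# Dropout depends on response status; non-responders on the drug drop out most often
# (e.g., perceived lack of benefit or side effects)
drop_drug = np.where(resp_drug, rng.random(n) < 0.05, rng.random(n) < 0.40)
drop_placebo = np.where(resp_placebo, rng.random(n) < 0.05, rng.random(n) < 0.15)

# Completers-only analysis: dropouts are excluded entirely
print("completers:",
      round(resp_drug[~drop_drug].mean(), 2), "vs",
      round(resp_placebo[~drop_placebo].mean(), 2))

# ITT-style analysis: all randomized subjects counted, dropouts treated as non-responders
print("ITT:       ",
      round((resp_drug & ~drop_drug).mean(), 2), "vs",
      round((resp_placebo & ~drop_placebo).mean(), 2))
```

In this setup, the completers-only comparison shows a sizable apparent advantage for a drug that does nothing, while the ITT-style comparison shows essentially no difference (exact numbers vary with the random seed).
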
Conclusion

Survivorship bias is a subtle but significant issue in clinical trials that can distort our understanding of medical treatments. By focusing only on the "survivors", "completers", "responders", or successful outcomes, we risk overlooking critical data that could impact patient care and public health. Addressing this bias requires a commitment to transparency, rigorous analysis, and inclusive research practices. As we continue to advance medical science, it’s essential to remember that what we see isn’t always the full picture—and that the missing pieces may hold the key to better, safer, and more effective treatments.

By being aware of survivorship bias and taking steps to mitigate it, we can ensure that clinical trials provide the most accurate and reliable evidence possible, ultimately improving outcomes for patients everywhere.
